Retire repository

The Fuel (openstack namespace) and fuel-ccp (x namespace)
repositories are unused and ready to retire.

This change removes all content from the repository and adds the
usual README file pointing out that the repository is retired,
following the process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011647.html

Depends-On: https://review.opendev.org/699362
Change-Id: I10917e42829b459c41ca46514faa72fc46abf7be
Andreas Jaeger 2019-12-18 09:52:23 +01:00
parent 768ac74a42
commit e61cc055ab
658 changed files with 10 additions and 158335 deletions

36
.gitignore vendored
@@ -1,36 +0,0 @@
*.pyc
*.sqlite
*.gem
# vim swap files
.*.swp
# services' runtime files
*.log
*.pid
# Vagrant housekeeping file
.vagrant
build
dist
/local_mirror
nosetests.xml
nailgun.log
lock
*.egg
.testrepository
.tox
.venv
.idea
.DS_Store
test_run/*
.cache
*.egg-info
fuel-web-venv
.bashrc

176
LICENSE
@@ -1,176 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

MAINTAINERS
@@ -1,145 +0,0 @@
---
description:
  For Fuel team structure and contribution policy, see [1].

  This is the repository-level MAINTAINERS file. All contributions to this
  repository must be approved by one or more Core Reviewers [2].

  If you are contributing to files (or creating new directories) in the
  root folder of this repository, please contact the Core Reviewers for
  review and merge requests.

  If you are contributing to subfolders of this repository, please check
  the 'maintainers' section of this file to find the maintainers of those
  specific modules.

  It is mandatory to get +1 from one or more maintainers before asking
  Core Reviewers for review/merge, in order to decrease the load on Core
  Reviewers [3]. Exceptions are when maintainers are themselves cores, or
  when maintainers are not available for some reason (e.g. on vacation).

  [1] https://specs.openstack.org/openstack/fuel-specs/policy/team-structure
  [2] https://review.openstack.org/#/admin/groups/664,members
  [3] http://lists.openstack.org/pipermail/openstack-dev/2015-August/072406.html

  Please keep this file in YAML format so that helper scripts can read it
  as configuration data.

maintainers:

- debian/: &packaging_team
  - name: Mikhail Ivanov
    email: mivanov@mirantis.com
    IRC: mivanov
  - name: Artem Silenkov
    email: asilenkov@mirantis.com
    IRC: asilenkov
  - name: Alexander Tsamutali
    email: atsamutali@mirantis.com
    IRC: astsmtl
  - name: Daniil Trishkin
    email: dtrishkin@mirantis.com
    IRC: dtrishkin
  - name: Ivan Udovichenko
    email: iudovichenko@mirantis.com
    IRC: tlbr
  - name: Igor Yozhikov
    email: iyozhikov@mirantis.com
    IRC: IgorYozhikov

- docs/: &docs_team
  - name: Svetlana Karslioglu
    email: skarslioglu@mirantis.com
    IRC: lanakars
  - name: Evgeny Konstantinov
    email: evkonstantinov@mirantis.com
    IRC: evkonst
  - name: Michele Fagan
    email: mfagan@mirantis.com
    IRC: mfagan
  - name: Olga Gusarenko
    email: ogusarenko@mirantis.com
    IRC: ogusarenko
  - name: Olena Logvinova
    email: ologvinova@mirantis.com
    IRC: ologvinova
  - name: Mariia Zlatkova
    email: mzlatkova@mirantis.com
    IRC: mzlatkova
  - name: Alexander Adamov
    email: aadamov@mirantis.com
    IRC: alexadamov

- nailgun/:
  - name: Aleksandr Kislitskii
    email: akislitsky@mirantis.com
    IRC: akislitsky
  - name: Alexander Saprykin
    email: asaprykin@mirantis.com
    IRC: asaprykin
  - name: Artem Roma
    email: aroma@mirantis.com
    IRC: aroma
  - name: Artur Svechnikov
    email: asvechnikov@mirantis.com
    IRC: asvechnikov
  - name: Ilya Kutukov
    email: ikutukov@mirantis.com
    IRC: ikutukov
  - name: Nikita Zubkov
    email: nzubkov@mirantis.com
    IRC: zubchick
  - name: Georgy Kibardin
    email: gkibardin@mirantis.com
    IRC: gkibardin

- nailgun/nailgun/extensions/network_manager/: &network_experts
  - name: Aleksei Kasatkin
    email: akasatkin@mirantis.com
    IRC: akasatkin
  - name: Artem Roma
    email: aroma@mirantis.com
    IRC: aroma
  - name: Ryan Moe
    email: rmoe@mirantis.com
    IRC: rmoe

- nailgun/nailgun/orchestrator/neutron_serializers.py: *network_experts
- nailgun/nailgun/orchestrator/nova_serializers.py: *network_experts

- nailgun/nailgun/fixtures/openstack.yaml: &ui_experts
  - name: Vitaly Kramskikh
    email: vkramskikh@mirantis.com
    IRC: vkramskikh
  - name: Julia Aranovich
    email: jkirnosova@mirantis.com
    IRC: jaranovich
  - name: Kate Pimenova
    email: kpimenova@mirantis.com
    IRC: kpimenova
  - name: Nikolay Bogdanov
    email: nbogdanov@mirantis.com
    IRC: nbogdanov

- specs/: *packaging_team
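
Since the file parses as YAML, a helper script can resolve the maintainers
responsible for a given path. A minimal Python sketch (the filename
MAINTAINERS and the PyYAML dependency are assumptions here):

import yaml

def maintainers_for(path, maintainers_file='MAINTAINERS'):
    # Each entry under 'maintainers' maps a path prefix to a list of people;
    # YAML anchors/aliases (&packaging_team, *packaging_team) resolve automatically.
    with open(maintainers_file) as f:
        data = yaml.safe_load(f)
    people = []
    for entry in data.get('maintainers', []):
        for prefix, team in entry.items():
            if path.startswith(prefix):
                people.extend(team)
    return people

# e.g. print the packaging team for a change under debian/
for person in maintainers_for('debian/rules'):
    print(person['name'], person['email'], person['IRC'])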

README.md
@@ -1,36 +0,0 @@
Team and repository tags
========================
[![Team and repository tags](http://governance.openstack.org/badges/fuel-web.svg)](http://governance.openstack.org/reference/tags/index.html)
<!-- Change things from this point on -->
fuel-web
========
fuel-web (nailgun) implements the REST API and deployment data
management. It manages disk volume configuration data, network
configuration data, and any other environment-specific data needed
for a successful deployment. It contains the orchestration logic
required to build provisioning and deployment instructions in the
right order. Nailgun uses an SQL database to store its data and an
AMQP service to interact with workers.
-----------------
Project resources
-----------------
Project status, bugs, and blueprints are tracked on Launchpad:
https://launchpad.net/fuel
Development documentation is hosted here:
https://docs.fuel-infra.org/fuel-dev
Any additional information can be found on the Fuel project wiki
https://wiki.openstack.org/wiki/Fuel
Anyone wishing to contribute to fuel-web should follow the general
OpenStack process. A good reference for it can be found here:
https://wiki.openstack.org/wiki/How_To_Contribute
http://docs.openstack.org/infra/manual/developers.html
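
As a concrete illustration of the REST API mentioned above, a hypothetical
client sketch in Python 3 (the address and the /api/nodes path are
assumptions for illustration, not confirmed by this repository):

import json
from urllib.request import urlopen

NAILGUN = 'http://localhost:8000'  # assumed Nailgun address, adjust as needed

def get_nodes():
    # Fetch the node inventory Nailgun keeps in its SQL database.
    with urlopen(NAILGUN + '/api/nodes') as resp:  # assumed endpoint
        return json.load(resp)

for node in get_nodes():
    print(node.get('name'), node.get('status'))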

10
README.rst Normal file
@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

bin/fencing-agent.cron
@@ -1 +0,0 @@
* * * * * root flock -w 0 -o /var/lock/fencing-agent.lock -c "/opt/nailgun/bin/fencing-agent.rb 2>&1 | tee -a /var/log/fencing-agent.log | /usr/bin/logger -t fencing-agent || true"

bin/fencing-agent.rb
@@ -1,188 +0,0 @@
#!/usr/bin/env ruby
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
begin
require 'rubygems'
rescue LoadError
end
require 'ohai/system'
require 'logger'
require 'open3'
require 'rexml/document'
unless Process.euid == 0
puts "You must be root"
exit 1
end
ENV['PATH'] = "/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin"
class FenceAgent
def initialize(logger)
@logger = logger
@os = Ohai::System.new()
@os.all_plugins
end
def system_info
{
:fqdn => (@os[:fqdn].strip rescue @os[:hostname].strip rescue nil),
:hostname => (@os[:hostname].strip rescue nil),
}.delete_if { |key, value| value.nil? or value.empty? or value == "Not Specified" }
end
# Check free root space for all nodes in the corosync cluster, if any up and running
# Do not wait or check for the fence actions results, if any were taken (it is in cluster's responsibility)
# TODO report to nailgun if fencing actions were taken
# * return 0, if nodes in the cluster don't need fencing by root free space criteria
# * return 1, if fence action is not applicable atm, e.g. corosync is absent or not accessible yet, or node wasn't yet provisioned
# * return 2, if some nodes have been ordered to fence and all corresponding crm commands were issued to corosync
# * return 3, if some nodes have been ordered to fence, but some of the crm commands were not issued for some reason
def check_and_fence
# Privates
# for unit tests' stubs
def random(s,n)
s+rand(n)
end
# sleep and exec cmd
def exec(cmd,sleep_time)
unless sleep_time.nil? or sleep_time == 0
@logger.info("Sleep #{Process.pid} for #{sleep_time}s, before issuing cmd:#{cmd}")
sleep(sleep_time)
end
Process.fork do
Process.exec(cmd)
end
Process.wait
$?.exitstatus
end
# * return target, if provisioned
# * return bootstrap, if not provisioned yet
def get_system_type(filename)
fl = File.open(filename, "r")
state = fl.readline.rstrip
fl.close
state
end
# * return true, if corosync running and CIB is up
def is_corosync_up
cmd = "/usr/sbin/crm_attribute --type crm_config --query --name dc-version &>/dev/null"
exec(cmd,random(5,10)) == 0
end
# assume is_corosync_up true
# * return xml with free root space data from CIB, or nil
def get_free_root_space_from_CIB
cmd = "/usr/sbin/cibadmin --query --xpath \"//nvpair[@name='root_free']\""
sleep(random(3,5))
REXML::Document.new(Open3.popen3(cmd)[1].read).root.elements['/xpath-query'] rescue nil
end
# assume is_corosync_up true
# * return true, if node is OFFLINE (or not applicable for any actions by corosync cluster services)
def is_offline(fqdn)
cmd = "/usr/sbin/cibadmin --query --xpath \"//node_state[@uname='#{fqdn}']\" | grep -q 'crmd=\"online\"'"
exec(cmd,random(5,10)) > 0
end
# assume is_corosync_up true
# issue fencing action to cluster services for given nodes
# * return 2, if some nodes have been ordered to fence and all crm commands have been issued.
# * return 3, if some nodes have been ordered to fence, but some of the crm commands were not issued for some reason.
def fence_nodes(nodes_to_fence)
failed = false
nodes_to_fence.each do |node|
cmd = "/usr/sbin/crm --force node fence #{node}"
if exec(cmd,random(15,15)) > 0
@logger.error("Cannot issue the command: #{cmd}")
failed = true
else
@logger.error("Issued the fence action: #{cmd}")
end
end
return 2 unless failed
3
end
# Start check for cluster's free root space
@logger.debug("Starting cluster free root space check")
if File.exist?("/etc/nailgun_systemtype")
# exit, if node is not provisioned yet
if get_system_type("/etc/nailgun_systemtype") != "target"
@logger.debug("The system state is not 'target' yet, exiting with 1")
return 1
end
else
@logger.debug("The /etc/nailgun_systemtype file is missing, exiting with 1")
return 1
end
# exit, if cibadmin tool doesn't exist yet
unless is_corosync_up
@logger.debug("Corosync is absent or not ready yet, exiting with 1")
return 1
end
# query CIB for nodes' root free space
stanzas = get_free_root_space_from_CIB
if stanzas.nil?
@logger.debug("Free space monitoring resource is not configured yet, exiting with 1")
return 1
end
nodes_to_fence = []
# for every node in the cluster
stanzas.each_element do |e|
items = e.attributes
# get the node's fqdn and free space at root partition from CIB
line = { :fqdn => /^status-(.*)-root_free$/.match(items['id'])[1], :root_free => items['value'] }
# get the node's status from CIB
@logger.debug("Got fqdn:#{line[:fqdn]}, root free space:#{line[:root_free]}G")
# if node is not the agent's one, and node's root free space is zero, and its status is online, add it to the list of nodes must be fenced
cmd = "/usr/sbin/cibadmin --query --xpath \"//node_state[@uname='#{line[:fqdn]}']\" | grep -q 'crmd=\"online\"'"
if line[:root_free].to_i == 0
offline = is_offline(line[:fqdn])
@logger.debug("Ignoring offline node #{line[:fqdn]}") if offline
end
itself = (system_info[:fqdn] == line[:fqdn] or system_info[:name] == line[:fqdn])
@logger.debug("Ignoring my own node #{line[:fqdn]} (cannot shoot myself)") if itself and line[:root_free].to_i == 0
nodes_to_fence.push(line[:fqdn]) unless line[:root_free].to_i > 0 or offline or itself or nodes_to_fence.include?(line[:fqdn])
end
# fence the failed nodes, if any, by random delay (15..30) and report an alert
unless nodes_to_fence.empty?
result = fence_nodes(nodes_to_fence)
@logger.error("Cluster has FAILED free root space check!")
return result
else
@logger.debug("Cluster has PASSED free root space check successfully")
return 0
end
end
end
# skip it, if under unit testing
if $0 == __FILE__
logger = Logger.new(STDOUT)
logger.level = Logger::DEBUG
agent = FenceAgent.new(logger)
begin
agent.check_and_fence
rescue => ex
logger.error "Cluster free root space check cannot be performed: #{ex.message}\n#{ex.backtrace}"
end
end
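
The selection rule in check_and_fence (fence online peers whose root
partition reports 0 GB free, never the agent's own node) is compact enough
to restate. A Python 3 sketch, assuming cibadmin output in the format the
agent queries above:

import re
import xml.etree.ElementTree as ET

def nodes_to_fence(cib_xml, my_fqdn, is_offline):
    # Mirrors the agent's criteria: skip nodes with free space,
    # skip offline nodes, skip ourselves, avoid duplicates.
    targets = []
    for nvpair in ET.fromstring(cib_xml).iter('nvpair'):
        m = re.match(r'^status-(.*)-root_free$', nvpair.get('id', ''))
        if not m:
            continue
        fqdn, free_gb = m.group(1), int(nvpair.get('value', '0'))
        if free_gb > 0 or fqdn == my_fqdn or is_offline(fqdn):
            continue
        if fqdn not in targets:
            targets.append(fqdn)
    return targets

# Using fixture-style data: node-9 is full and online, so it is selected.
xml = ('<xpath-query>'
       '<nvpair id="status-node-7.test.domain.local-root_free" name="root_free" value="5"/>'
       '<nvpair id="status-node-9.test.domain.local-root_free" name="root_free" value="0"/>'
       '</xpath-query>')
print(nodes_to_fence(xml, 'node-7.test.domain.local', lambda fqdn: False))
# -> ['node-9.test.domain.local']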

@@ -1,148 +0,0 @@
require 'rubygems'
require 'rspec'
require 'mocha/api'
# stub the root rights for agent script under test
Process.stubs(:euid).returns(0)
# use load for agent script w/o '.rb' extension
require './bin/fencing-agent'
# fixtures
$xml_all_ok = <<END
<xpath-query>
<nvpair id="status-node-7.test.domain.local-root_free" name="root_free" value="5"/>
<nvpair id="status-node-8.test.domain.local-root_free" name="root_free" value="5"/>
<nvpair id="status-node-9.test.domain.local-root_free" name="root_free" value="5"/>
</xpath-query>
END
$xml_need_fence1 = <<END
<xpath-query>
<nvpair id="status-node-7.test.domain.local-root_free" name="root_free" value="5"/>
<nvpair id="status-node-8.test.domain.local-root_free" name="root_free" value="5"/>
<nvpair id="status-node-9.test.domain.local-root_free" name="root_free" value="0"/>
</xpath-query>
END
$xml_need_fence2 = <<END
<xpath-query>
<nvpair id="status-node-7.test.domain.local-root_free" name="root_free" value="0"/>
<nvpair id="status-node-8.test.domain.local-root_free" name="root_free" value="5"/>
<nvpair id="status-node-9.test.domain.local-root_free" name="root_free" value="0"/>
</xpath-query>
END
$fl = StringIO.new("target")
describe FenceAgent do
before :each do
logger = Logger.new(STDOUT)
logger.level = Logger::DEBUG
@agent = FenceAgent.new(logger)
@agent.stubs(:random).returns(0)
File.stub(:exist?).with("/etc/nailgun_systemtype").and_return(true)
File.stub(:open).with("/etc/nailgun_systemtype", "r").and_return($fl)
end
describe "#new" do
it "takes logger and url parameters and returns a nailgun agent instance" do
@agent.should be_an_instance_of FenceAgent
end
end
# Fence daemon tests
describe "#check_and_fence" do
before :each do
@agent.stubs(:is_corosync_up).returns(true)
@agent.stubs(:get_system_type).returns("target")
end
it "Check N/A: should return 1, if system type file is missing" do
File.stub(:exist?).with("/etc/nailgun_systemtype").and_return(false)
@agent.check_and_fence.should eq(1)
end
it "Check N/A: should return 1, if fence action is not applicable because of wrong system type" do
@agent.stubs(:get_system_type).returns("bootstrap")
@agent.check_and_fence.should eq(1)
end
it "Check N/A: should return 1, if corosync is not ready" do
@agent.stub(:is_corosync_up).and_return(false)
@agent.check_and_fence.should eq(1)
end
it "Check N/A: should return 1, if none of free space monitoring ocf resources ready" do
@agent.stubs(:get_free_root_space_from_CIB).returns(nil)
@agent.check_and_fence.should eq(1)
end
it "Check PASSED: should return 0, if nodes in the cluster don't need fencing by root free space criteria" do
@agent.stubs(:get_free_root_space_from_CIB).returns(REXML::Document.new($xml_all_ok).root.elements['/xpath-query'])
@agent.check_and_fence.should eq(0)
end
it "Check FAILED: if one node must be fenced and is online, should issue fence command to corosync and return 2" do
@agent.stubs(:get_free_root_space_from_CIB).returns(REXML::Document.new($xml_need_fence1).root.elements['/xpath-query'])
expected_node = "node-9.test.domain.local"
expected_nodes = [ expected_node ]
@agent.stub(:exec).with(
"/usr/sbin/cibadmin --query --xpath \"//node_state[@uname='#{expected_node}']\" | grep -q 'crmd=\"online\"'"
).and_return(0)
@agent.stub(:exec).with(
"/usr/sbin/crm --force node fence #{expected_node}"
).and_return(0)
@agent.should_receive(:is_offline).with(expected_node).exactly(1).times.and_return(false)
@agent.should_receive(:fence_nodes).with(expected_nodes).exactly(1).times.and_return(2)
@agent.check_and_fence.should eq(2)
end
it "Check FAILED: if some nodes must be fenced and are online, should issue fence commands to corosync and return 2" do
@agent.stubs(:get_free_root_space_from_CIB).returns(REXML::Document.new($xml_need_fence2).root.elements['/xpath-query'])
expected_node1 = "node-7.test.domain.local"
expected_node2 = "node-9.test.domain.local"
expected_nodes = [ expected_node1, expected_node2 ]
expected_nodes.each do |node|
@agent.stub(:exec).with(
"/usr/sbin/cibadmin --query --xpath \"//node_state[@uname='#{node}']\" | grep -q 'crmd=\"online\"'"
).and_return(0)
@agent.stub(:exec).with(
"/usr/sbin/crm --force node fence #{node}"
).and_return(0)
@agent.should_receive(:is_offline).with(node).exactly(1).times.and_return(false)
end
@agent.should_receive(:fence_nodes).with(expected_nodes).exactly(1).times.and_return(2)
@agent.check_and_fence.should eq(2)
end
it "Check FAILED: should return 3, if some nodes are online and has been ordered to fence, but some of crm commands were not issued for some reasons" do
@agent.stubs(:get_free_root_space_from_CIB).returns(REXML::Document.new($xml_need_fence2).root.elements['/xpath-query'])
expected_node1 = "node-7.test.domain.local"
expected_node2 = "node-9.test.domain.local"
expected_nodes = [ expected_node1, expected_node2 ]
expected_nodes.each do |node|
@agent.stub(:exec).with(
"/usr/sbin/cibadmin --query --xpath \"//node_state[@uname='#{node}']\" | grep -q 'crmd=\"online\"'"
).and_return(0)
@agent.should_receive(:is_offline).with(node).exactly(1).times.and_return(false)
end
@agent.stub(:exec).with(
"/usr/sbin/crm --force node fence #{expected_node1}"
).and_return(0)
@agent.stub(:exec).with(
"/usr/sbin/crm --force node fence #{expected_node2}"
).and_return(6)
@agent.should_receive(:fence_nodes).with(expected_nodes).exactly(1).times.and_return(3)
@agent.check_and_fence.should eq(3)
end
it "Check consider PASSED behavior: should exclude itself from the fencing and return 0" do
@agent.stubs(:get_free_root_space_from_CIB).returns(REXML::Document.new($xml_need_fence1).root.elements['/xpath-query'])
@agent.stubs(:is_offline).returns(false)
@agent.stubs(:system_info).returns({ :fqdn => 'node-9.test.domain.local', :name => 'node-9' })
@agent.check_and_fence.should eq(0)
end
it "Check consider PASSED behavior: should exclude offline nodes from the fencing and return 0" do
@agent.stubs(:get_free_root_space_from_CIB).returns(REXML::Document.new($xml_need_fence1).root.elements['/xpath-query'])
@agent.stubs(:is_offline).returns(true)
@agent.check_and_fence.should eq(0)
end
end
end

@@ -1,505 +0,0 @@
#!/usr/bin/env python
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import logging
from logging.handlers import SysLogHandler
from optparse import OptionParser
import os
import re
import signal
import sys
import time
# Add syslog levels to logging module.
logging.NOTICE = 25
logging.ALERT = 60
logging.EMERG = 70
logging.addLevelName(logging.NOTICE, 'NOTICE')
logging.addLevelName(logging.ALERT, 'ALERT')
logging.addLevelName(logging.EMERG, 'EMERG')
SysLogHandler.priority_map['NOTICE'] = 'notice'
SysLogHandler.priority_map['ALERT'] = 'alert'
SysLogHandler.priority_map['EMERG'] = 'emerg'
# Define data and message format according to RFC 5424.
rfc5424_format = '{version} {timestamp} {hostname} {appname} {procid}'\
' {msgid} {structured_data} {msg}'
date_format = '%Y-%m-%dT%H:%M:%SZ'
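# With these settings a rendered record looks like (illustrative values):
#   1 2016-03-21T12:36:45Z node-1 anaconda - - - Installation started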
# Define global semaphore.
sending_in_progress = 0
# Define file types.
msg_levels = {'ruby': {'regex': '(?P<level>[DIWEF]), \[[0-9-]{10}T',
'levels': {'D': logging.DEBUG,
'I': logging.INFO,
'W': logging.WARNING,
'E': logging.ERROR,
'F': logging.FATAL
}
},
'syslog': {'regex': ('[0-9-]{10}T[0-9:]{8}Z (?P<level>'
'debug|info|notice|warning|err|crit|'
'alert|emerg)'),
'levels': {'debug': logging.DEBUG,
'info': logging.INFO,
'notice': logging.NOTICE,
'warning': logging.WARNING,
'err': logging.ERROR,
'crit': logging.CRITICAL,
'alert': logging.ALERT,
'emerg': logging.EMERG
}
},
'anaconda': {'regex': ('[0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
},
'netprobe': {'regex': ('[0-9-]{10} [0-9:]{8},[0-9]+ (?P<level>'
'DEBUG|INFO|WARNING|ERROR|CRITICAL)'),
'levels': {'DEBUG': logging.DEBUG,
'INFO': logging.INFO,
'WARNING': logging.WARNING,
'ERROR': logging.ERROR,
'CRITICAL': logging.CRITICAL
}
}
}
relevel_errors = {
'anaconda': [
{
'regex': 'Error downloading \
http://.*/images/(product|updates).img: HTTP response code said error',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
},
{
'regex': 'got to setupCdrom without a CD device',
'levelfrom': logging.ERROR,
'levelto': logging.WARNING
}
]
}
# Create a main logger.
logging.basicConfig(format='%(levelname)s: %(message)s')
main_logger = logging.getLogger()
main_logger.setLevel(logging.NOTSET)
class WatchedFile:
"""WatchedFile(filename) => Object that read lines from file if exist."""
def __init__(self, name):
self.name = name
self.fo = None
self.where = 0
def reset(self):
if self.fo:
self.fo.close()
self.fo = None
self.where = 0
def _checkRewrite(self):
try:
if os.stat(self.name)[6] < self.where:
self.reset()
except OSError:
self.close()
def readLines(self):
"""Return list of last append lines from file if exist."""
self._checkRewrite()
if not self.fo:
try:
self.fo = open(self.name, 'r')
except IOError:
return ()
lines = self.fo.readlines()
self.where = self.fo.tell()
return lines
def close(self):
self.reset()
class WatchedGroup:
"""Can send data from group of specified files to specified servers."""
def __init__(self, servers, files, name):
self.servers = servers
self.files = files
self.log_type = files.get('log_type', 'syslog')
self.name = name
self._createLogger()
def _createLogger(self):
self.watchedfiles = []
logger = logging.getLogger(self.name)
logger.setLevel(logging.NOTSET)
logger.propagate = False
# Create log formatter.
format_dict = {'version': '1',
'timestamp': '%(asctime)s',
'hostname': config['hostname'],
'appname': self.files['tag'],
'procid': '-',
'msgid': '-',
'structured_data': '-',
'msg': '%(message)s'
}
log_format = rfc5424_format.format(**format_dict)
formatter = logging.Formatter(log_format, date_format)
# Add log handler for each server.
for server in self.servers:
port = 'port' in server and server['port'] or 514
syslog = SysLogHandler((server["host"], port))
syslog.setFormatter(formatter)
logger.addHandler(syslog)
self.logger = logger
# Create WatchedFile objects from list of files.
for name in self.files['files']:
self.watchedfiles.append(WatchedFile(name))
def send(self):
"""Send append data from files to servers."""
for watchedfile in self.watchedfiles:
for line in watchedfile.readLines():
line = line.strip()
level = self._get_msg_level(line, self.log_type)
# Get rid of duplicated information in anaconda logs
line = re.sub(
msg_levels[self.log_type]['regex'] + "\s*:?\s?",
"",
line
)
# Ignore meaningless errors
try:
for r in relevel_errors[self.log_type]:
if level == r['levelfrom'] and \
re.match(r['regex'], line):
level = r['levelto']
except KeyError:
pass
self.logger.log(level, line)
main_logger and main_logger.log(
level,
'From file "%s" send: %s' % (watchedfile.name, line)
)
@staticmethod
def _get_msg_level(line, log_type):
if log_type in msg_levels:
msg_type = msg_levels[log_type]
regex = re.match(msg_type['regex'], line)
if regex:
return msg_type['levels'][regex.group('level')]
return logging.INFO
def sig_handler(signum, frame):
"""Send all new data when signal arrived."""
if not sending_in_progress:
send_all()
exit(signum)
else:
config['run_once'] = True
def send_all():
"""Send any updates."""
for group in watchlist:
group.send()
def main_loop():
"""Periodicaly call sendlogs() for each group in watchlist."""
signal.signal(signal.SIGINT, sig_handler)
signal.signal(signal.SIGTERM, sig_handler)
while watchlist:
time.sleep(0.5)
send_all()
# If asked to run_once, exit now
if config['run_once']:
break
class Config:
"""Collection of config generation methods.
Usage: config = Config.getConfig()
"""
@classmethod
def getConfig(cls):
"""Generate config from command line arguments and config file."""
# example_config = {
# "daemon": True,
# "run_once": False,
# "debug": False,
# "watchlist": [
# {"servers": [ {"host": "localhost", "port": 514} ],
# "watchfiles": [
# {"tag": "anaconda",
# "log_type": "anaconda",
# "files": ["/tmp/anaconda.log",
# "/mnt/sysimage/root/install.log"]
# }
# ]
# }
# ]
# }
default_config = {"daemon": True,
"run_once": False,
"debug": False,
"hostname": cls._getHostname(),
"watchlist": []
}
# First use default config as running config.
config = dict(default_config)
# Get command line options and validate it.
cmdline = cls.cmdlineParse()[0]
# Check config file source and read it.
if cmdline.config_file or cmdline.stdin_config:
try:
if cmdline.stdin_config is True:
fo = sys.stdin
else:
fo = open(cmdline.config_file, 'r')
parsed_config = json.load(fo)
if cmdline.debug:
print(parsed_config)
except IOError: # Raised if IO operations failed.
main_logger.error("Can not read config file %s\n" %
cmdline.config_file)
exit(1)
except ValueError as e: # Raised if json parsing failed.
main_logger.error("Can not parse config file. %s\n" %
e.message)
exit(1)
# Validate config from config file.
cls.configValidate(parsed_config)
# Copy gathered config from config file to running config
# structure.
for key, value in parsed_config.items():
config[key] = value
else:
# If no config file specified use watchlist setting from
# command line.
watchlist = {"servers": [{"host": cmdline.host,
"port": cmdline.port}],
"watchfiles": [{"tag": cmdline.tag,
"log_type": cmdline.log_type,
"files": cmdline.watchfiles}]}
config['watchlist'].append(watchlist)
# Apply behavioural command line options to running config.
if cmdline.no_daemon:
config["daemon"] = False
if cmdline.run_once:
config["run_once"] = True
if cmdline.debug:
config["debug"] = True
return config
@staticmethod
def _getHostname():
"""Generate hostname by BOOTIF kernel option or use os.uname()."""
with open('/proc/cmdline') as fo:
cpu_cmdline = fo.readline().strip()
regex = re.search('(?<=BOOTIF=)([0-9a-fA-F-]*)', cpu_cmdline)
if regex:
mac = regex.group(0).upper()
return ''.join(mac.split('-'))
return os.uname()[1]
@staticmethod
def cmdlineParse():
"""Parse command line config options."""
parser = OptionParser()
parser.add_option("-c", "--config", dest="config_file", metavar="FILE",
help="Read config from FILE.")
parser.add_option("-i", "--stdin", dest="stdin_config", default=False,
action="store_true", help="Read config from Stdin.")
# FIXIT Add optionGroups.
parser.add_option("-r", "--run-once", dest="run_once",
action="store_true", help="Send all data and exit.")
parser.add_option("-n", "--no-daemon", dest="no_daemon",
action="store_true", help="Do not daemonize.")
parser.add_option("-d", "--debug", dest="debug",
action="store_true", help="Print debug messages.")
parser.add_option("-t", "--tag", dest="tag", metavar="TAG",
help="Set tag of sending messages as TAG.")
parser.add_option("-T", "--type", dest="log_type", metavar="TYPE",
default='syslog',
help="Set type of files as TYPE"
"(default: %default).")
parser.add_option("-f", "--watchfile", dest="watchfiles",
action="append",
metavar="FILE", help="Add FILE to watchlist.")
parser.add_option("-s", "--host", dest="host", metavar="HOSTNAME",
help="Set destination as HOSTNAME.")
parser.add_option("-p", "--port", dest="port", type="int", default=514,
metavar="PORT",
help="Set remote port as PORT (default: %default).")
options, args = parser.parse_args()
# Validate gathered options.
if options.config_file and options.stdin_config:
parser.error("You must not set both options --config"
" and --stdin at the same time.")
exit(1)
if ((options.config_file or options.stdin_config) and
(options.tag or options.watchfiles or options.host)):
main_logger.warning("If --config or --stdin is set up options"
" --tag, --watchfile, --type,"
" --host and --port will be ignored.")
if (not (options.config_file or options.stdin_config) and
not (options.tag and options.watchfiles and options.host)):
parser.error("Options --tag, --watchfile and --host"
" must be set up at the same time.")
exit(1)
return options, args
@staticmethod
def _checkType(value, value_type, value_name='', msg=None):
"""Check correctness of type of value and exit if not."""
if not isinstance(value, value_type):
message = msg or ("Value %r in config has type %r but"
" %r is expected." %
(value_name, type(value).__name__, value_type.__name__))
main_logger.error(message)
exit(1)
@classmethod
def configValidate(cls, config):
"""Validate types and names of data items in config."""
cls._checkType(config, dict, msg='Config must be a dict.')
for key in ("daemon", "run_once", "debug"):
if key in config:
cls._checkType(config[key], bool, key)
key = "hostname"
if key in config:
cls._checkType(config[key], basestring, key)
key = "watchlist"
if key in config:
cls._checkType(config[key], list, key)
else:
main_logger.error("There must be key %r in config." % key)
exit(1)
for item in config["watchlist"]:
cls._checkType(item, dict, "watchlist[n]")
key, name = "servers", "watchlist[n] => servers"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
key, name = "watchfiles", "watchlist[n] => watchfiles"
if key in item:
cls._checkType(item[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key, '"watchlist[n]" item'))
exit(1)
for item2 in item["servers"]:
cls._checkType(item2, dict, "watchlist[n] => servers[n]")
key, name = "host", "watchlist[n] => servers[n] => host"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => servers[n]" item'))
exit(1)
key, name = "port", "watchlist[n] => servers[n] => port"
if key in item2:
cls._checkType(item2[key], int, name)
for item2 in item["watchfiles"]:
cls._checkType(item2, dict, "watchlist[n] => watchfiles[n]")
key, name = "tag", "watchlist[n] => watchfiles[n] => tag"
if key in item2:
cls._checkType(item2[key], basestring, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
key = "log_type"
name = "watchlist[n] => watchfiles[n] => log_type"
if key in item2:
cls._checkType(item2[key], basestring, name)
key, name = "files", "watchlist[n] => watchfiles[n] => files"
if key in item2:
cls._checkType(item2[key], list, name)
else:
main_logger.error("There must be key %r in %s in config." %
(key,
'"watchlist[n] => watchfiles[n]" item'))
exit(1)
for item3 in item2["files"]:
name = "watchlist[n] => watchfiles[n] => files[n]"
cls._checkType(item3, basestring, name)
# Create global config.
config = Config.getConfig()
# Create list of WatchedGroup objects with different log names.
watchlist = []
i = 0
for item in config["watchlist"]:
for files in item['watchfiles']:
watchlist.append(WatchedGroup(item['servers'], files, str(i)))
i = i + 1
# Fork and loop
if config["daemon"]:
if not os.fork():
# Redirect the standard I/O file descriptors to the specified file.
main_logger = None
DEVNULL = getattr(os, "devnull", "/dev/null")
os.open(DEVNULL, os.O_RDWR) # standard input (0)
os.dup2(0, 1) # Duplicate standard input to standard output (1)
os.dup2(0, 2) # Duplicate standard input to standard error (2)
main_loop()
sys.exit(1)
sys.exit(0)
else:
if not config['debug']:
main_logger = None
main_loop()
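
The commented example_config above, combined with the --stdin option, is
enough to drive the watcher end to end. A Python 3 sketch (the script name
send2syslog.py is an assumption for illustration):

import json
import subprocess

config = {
    "daemon": False,
    "run_once": True,
    "watchlist": [
        {"servers": [{"host": "localhost", "port": 514}],
         "watchfiles": [{"tag": "anaconda",
                         "log_type": "anaconda",
                         "files": ["/tmp/anaconda.log"]}]}
    ],
}

# Feed the JSON config on stdin; a single pass is implied by run_once above.
subprocess.run(["python2", "send2syslog.py", "--stdin"],
               input=json.dumps(config).encode())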

@@ -1,232 +0,0 @@
#!/bin/sh
# config
VENV='fuel-web-venv'
VIEW='0'
NOINSTALL='0'
HTML='docs/_build/html/index.html'
SINGLEHTML='docs/_build/singlehtml/index.html'
EPUB='docs/_build/epub/Fuel.epub'
LATEXPDF='docs/_build/latex/fuel.pdf'
PDF='docs/_build/pdf/Fuel.pdf'
# functions
check_if_debian() {
test -f '/etc/debian_version'
return $?
}
check_if_redhat() {
test -f '/etc/redhat-release'
return $?
}
check_java_present() {
which java 1>/dev/null 2>/dev/null
return $?
}
check_latex_present() {
which pdflatex 1>/dev/null 2>/dev/null
return $?
}
cd_to_dir() {
FILE="${0}"
DIR=`dirname "${FILE}"`
cd "${DIR}"
if [ $? -gt 0 ]; then
echo "Cannot cd to dir ${DIR}!"
exit 1
fi
}
redhat_prepare_packages() {
# prepare postgresql utils and dev packages
# required to build psycopg Python pgsql library
sudo yum -y install postgresql postgresql-devel
# prepare python tools
sudo yum -y install python-devel make python-pip python-virtualenv
}
debian_prepare_packages() {
# prepare postgresql utils and dev packages
# required to build psycopg Python pgsql library
sudo apt-get -y install postgresql postgresql-server-dev-all
# prepare python tools
sudo apt-get -y install python-dev python-pip make python-virtualenv
}
install_java() {
if check_if_debian; then
sudo apt-get -y install default-jre
elif check_if_redhat; then
sudo yum -y install java
else
echo 'OS is not supported!'
exit 1
fi
}
prepare_packages() {
if check_if_debian; then
debian_prepare_packages
elif check_if_redhat; then
redhat_prepare_packages
else
echo 'OS is not supported!'
exit 1
fi
}
prepare_venv() {
# activate venv
virtualenv "${VENV}" # you can use any name instead of 'fuel'
. "${VENV}/bin/activate" # command selects the particular environment
# install dependencies
pip install -r 'nailgun/test-requirements.txt'
}
download_plantuml() {
if ! [ -f 'docs/plantuml.jar' ]; then
wget 'http://downloads.sourceforge.net/project/plantuml/plantuml.jar' -O 'docs/plantuml.jar'
fi
}
view_file() {
if [ "`uname`" = "Darwin" ]; then
open "${1}"
elif [ "`uname`" = "Linux" ]; then
xdg-open "${1}"
else
echo 'OS is not supported!'
exit 1
fi
}
build_html() {
make -C docs html
if [ "${VIEW}" = '1' ]; then
view_file "${HTML}"
fi
}
build_singlehtml() {
make -C docs singlehtml
if [ "${VIEW}" = '1' ]; then
view_file "${SINGLEHTML}"
fi
}
build_latexpdf() {
check_latex_present
if [ $? -gt 0 ]; then
echo 'You need to install LaTeX if you want to build PDF!'
exit 1
fi
make -C docs latexpdf
if [ "${VIEW}" = '1' ]; then
view_file "${LATEXPDF}"
fi
}
build_epub() {
make -C docs epub
if [ "${VIEW}" = '1' ]; then
view_file "${EPUB}"
fi
}
build_pdf() {
make -C docs pdf
if [ "${VIEW}" = '1' ]; then
view_file "${PDF}"
fi
}
clear_build() {
make -C docs clean
}
show_help() {
cat <<EOF
Documentation build helper
-o - Open generated documentation after build
-c - Clear the build directory
-n - Don't try to install any packages
-f - Documentation format [html,singlehtml,pdf,latexpdf,epub]
EOF
}
# MAIN
while getopts ":onhcf:" opt; do
case $opt in
o)
VIEW='1'
;;
n)
NOINSTALL='1'
;;
h)
show_help
exit 0
;;
c)
clear_build
exit 0
;;
f)
FORMAT="${OPTARG}"
;;
\?)
echo "Invalid option: -$OPTARG" >&2
show_help
exit 1
;;
esac
done
cd_to_dir
check_java_present
if [ $? -gt 0 ]; then
install_java
fi
if [ "${NOINSTALL}" = '0' ]; then
prepare_packages
fi
prepare_venv
download_plantuml
if [ "${FORMAT}" = '' ]; then
FORMAT='html'
fi
case "${FORMAT}" in
html)
build_html
;;
singlehtml)
build_singlehtml
;;
pdf)
build_pdf
;;
latexpdf)
build_latexpdf
;;
epub)
build_epub
;;
*)
echo "Format ${FORMAT} is not supported!"
exit 1
;;
esac

11
debian/changelog vendored
@@ -1,11 +0,0 @@
fuel-nailgun (10.0.0-1) trusty; urgency=low
* Bump version to 10.0
-- Sergey Kulanov <skulanov@mirantis.com> Mon, 21 Mar 2016 12:36:45 +0200
fuel-nailgun (9.0.0-1) trusty; urgency=low
* Bump version to 9.0
-- Sergey Kulanov <skulanov@mirantis.com> Thu, 17 Dec 2015 16:52:26 +0200

1
debian/compat vendored
@@ -1 +0,0 @@
8

88
debian/control vendored
@@ -1,88 +0,0 @@
Source: fuel-nailgun
Section: python
Priority: optional
Maintainer: Mirantis <product@mirantis.com>
Build-Depends: debhelper (>= 9),
dh-python,
dh-systemd,
openstack-pkg-tools,
python-all,
python-setuptools (>= 16.0),
python-pbr (>= 1.8),
python-yaml (>= 3.1.0),
git,
Standards-Version: 3.9.4
Homepage: https://launchpad.net/fuel
Package: fuel-nailgun
Architecture: all
Depends: fuel-openstack-metadata,
python-alembic (>= 0.8.4),
python-amqplib (>= 1.0.2),
python-anyjson (>= 0.3.3),
python-babel (>= 2.3.4),
python-crypto (>= 2.6.1),
python-decorator (>= 3.4.0),
python-distributed (>= 1.16.0),
python-fysom (>= 1.0.11),
python-iso8601 (>= 0.1.11),
python-jinja2 (>= 2.8),
python-jsonschema (>= 2.3.0),
python-keystoneclient (>= 1.7.0),
python-keystonemiddleware (>= 4.0.0),
python-kombu (>= 3.0.25),
python-mako (>= 0.9.1),
python-markupsafe (>= 0.18),
python-migrate (>= 0.9.6),
python-netaddr (>= 0.7.12),
python-netifaces (>= 0.10.4),
python-oslo-config (>= 1:1.2.1),
python-oslo-serialization (>= 1.0.0),
python-oslo-db (>= 1.0.0),
python-paste (>= 1.7.5.1),
python-ply (>= 3.4),
python-psycopg2 (>= 2.5.1),
python-requests (>= 2.10.0),
python-simplejson (>= 3.3.0),
python-six (>= 1.9.0),
python-sqlalchemy (>= 1.0.10),
python-stevedore (>= 1.10.0),
python-urllib3 (>= 1.15.1),
python-webpy (>= 0.37),
python-wsgilog (>= 0.3),
python-yaml (>= 3.10),
python-novaclient (>= 2.29.0),
python-networkx (>= 1.8),
python-cinderclient (>= 1.6.0),
python-pydot-ng (>= 1.0.0),
python-uwsgidecorators (>= 2.0.12),
python-yaql (>= 1.1.0),
python-tz (>= 2013.6),
${python:Depends},
${misc:Depends}
Description: fuel-web (nailgun) implements the REST API and deployment data management.
 It manages disk volume configuration data, network configuration data
 and any other environment-specific data needed for a successful deployment.
 It contains the orchestration logic required to build provisioning and
 deployment instructions in the right order. Nailgun uses an SQL database
 to store its data and an AMQP service to interact with workers.
Package: fuel-openstack-metadata
Architecture: all
Depends: ${misc:Depends}
Description: fuel-web (nailgun) implements the REST API and deployment data management.
 It manages disk volume configuration data, network configuration data
 and any other environment-specific data needed for a successful deployment.
 It contains the orchestration logic required to build provisioning and
 deployment instructions in the right order. Nailgun uses an SQL database
 to store its data and an AMQP service to interact with workers.
Package: fencing-agent
Architecture: all
Depends: ohai,
ruby-httpclient,
ruby-rethtool,
ruby-cstruct,
ruby-json,
${misc:Depends}
Description: Fencing agent

28
debian/copyright vendored
@@ -1,28 +0,0 @@
Format: http://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: fuel-web
Source: https://github.com/openstack/fuel-web.git
Files: *
Copyright: (c) 2016, Mirantis, Inc.
License: Apache-2
Files: debian/*
Copyright: (c) 2016, Mirantis, Inc.
License: Apache-2
License: Apache-2
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
http://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
On Debian-based systems the full text of the Apache version 2.0 license
can be found in `/usr/share/common-licenses/Apache-2.0'.

1
debian/docs vendored
@@ -1 +0,0 @@
README.md

debian/fencing-agent.dirs
@@ -1,2 +0,0 @@
opt/nailgun/bin
etc/cron.d

debian/fencing-agent.install
@@ -1,2 +0,0 @@
bin/fencing-agent.rb opt/nailgun/bin
bin/fencing-agent.cron etc/cron.d

debian/fuel-nailgun.dirs
@@ -1,4 +0,0 @@
etc/nailgun
var/log/nailgun
usr/bin
usr/share

debian/fuel-nailgun.install
@@ -1,2 +0,0 @@
nailgun/nailgun/settings.yaml /etc/nailgun/
systemd/* /lib/systemd/system/

debian/fuel-openstack-metadata.dirs
@@ -1,2 +0,0 @@
usr/share/fuel-openstack-metadata
etc

debian/fuel-openstack-metadata.install
@@ -1,2 +0,0 @@
nailgun/nailgun/fixtures/openstack.yaml usr/share/fuel-openstack-metadata
fuel_openstack_version etc

debian/openstack-version
@@ -1,12 +0,0 @@
#! /usr/bin/env python2
import sys
import yaml

if len(sys.argv) == 2:
    openstack_yaml = open(sys.argv[1])
    yaml = yaml.safe_load(openstack_yaml)
    elems = filter(lambda r: r['fields'].get('name'), yaml)
    print elems[0]['fields']['version']
else:
    print """Usage: {} OPENSTACK_YAML""".format(sys.argv[0])

46
debian/rules vendored
@@ -1,46 +0,0 @@
#!/usr/bin/make -f
# -*- makefile -*-
DH_VERBOSE=1
PYTHONS:=$(shell pyversions -vr)
include /usr/share/openstack-pkg-tools/pkgos.make
#export OSLO_PACKAGE_VERSION=$(shell dpkg-parsechangelog | grep Version: | cut -d' ' -f2 | sed -e 's/^[[:digit:]]*://' -e 's/[-].*//' -e 's/~/.0/' | head -n 1)
%:
dh $@ --with python2,systemd
override_dh_auto_build:
dh_auto_build
python $(CURDIR)/debian/openstack-version nailgun/nailgun/fixtures/openstack.yaml > $(CURDIR)/fuel_openstack_version
override_dh_auto_install:
cd nailgun \
set -e ; for pyvers in $(PYTHONS); do \
python$$pyvers setup.py install --install-layout=deb \
--root $(CURDIR)/debian/fuel-nailgun; \
done
override_dh_clean:
rm -rf build
dh_clean -O--buildsystem=python_distutils
rm -f debian/nailgun-common.postinst
rm -f debian/*.service debian/*.init debian/*.upstart
override_dh_systemd_enable: gen-init-configurations
dh_systemd_enable --no-enable
override_dh_systemd_start: gen-init-configurations
dh_systemd_start --no-start
# Commands not to run
override_dh_installcatalogs:
override_dh_installemacsen override_dh_installifupdown:
override_dh_installinfo override_dh_installmenu override_dh_installmime:
override_dh_installmodules override_dh_installlogcheck:
override_dh_installpam override_dh_installppp override_dh_installudev override_dh_installwm:
override_dh_installxfonts override_dh_gconf override_dh_icons override_dh_perl override_dh_usrlocal:
override_dh_installcron override_dh_installdebconf:
override_dh_installlogrotate override_dh_installgsettings:

debian/source/format
@@ -1 +0,0 @@
3.0 (quilt)

2
docs/.gitignore vendored
@@ -1,2 +0,0 @@
_build/
plantuml.jar

docs/Makefile
@@ -1,173 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
PLANTUML = plantuml.jar
PLANTUML_FROM_PKG = /usr/share/plantuml/plantuml.jar
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " pdf to make PDF using rst2pdf"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
-rm -rf $(BUILDDIR)/*
$(PLANTUML):
@if [ -f $(PLANTUML_FROM_PKG) ]; \
then \
echo "Have installed plantuml. Creating link $(PLANTUML) on $(PLANTUML_FROM_PKG)."; \
ln -sf $(PLANTUML_FROM_PKG) $(PLANTUML); \
else \
echo "Downloading plantuml.jar."; \
wget https://downloads.sourceforge.net/project/plantuml/plantuml.jar -O $(PLANTUML); \
fi
$(ACTION.TOUCH)
html: $(PLANTUML)
$(SPHINXBUILD) -b html -W $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/fuel.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/fuel.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/fuel"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/fuel"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
pdf:
$(SPHINXBUILD) -b pdf $(ALLSPHINXOPTS) $(BUILDDIR)/pdf
@echo
@echo "Build finished; the PDF file is in $(BUILDDIR)/pdf."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."

@@ -1,2 +0,0 @@
The development guide was moved from this directory to the openstack/fuel-docs repository.
Only the auto-generated API documentation remains here.

@@ -1,4 +0,0 @@
<li class="dropdown">
<a href="{{ pathto(master_doc) }}" class="dropdown-toggle" data-toggle="dropdown">{{ _('Site') }} <b class="caret"></b></a>
<ul class="dropdown-menu globaltoc">{{ toctree(maxdepth=1) }}</ul>
</li>

@@ -1,134 +0,0 @@
{% extends "basic/layout.html" %}
{% set script_files = script_files + ['_static/bootstrap.js'] %}
{% set css_files = ['_static/bootstrap.css', '_static/bootstrap-sphinx.css'] + css_files %}
{# Sidebar: Rework into our Bootstrap nav section. #}
{% macro navBar() %}
<div id="navbar" class="navbar navbar-fixed-top">
<div class="navbar-inner">
<div class="container-fluid">
<a class="brand" href="{{ pathto(master_doc) }}">{{ project|e }}</a>
<span class="navbar-text pull-left"><b>{{ version|e }}</b></span>
<ul class="nav">
<li class="divider-vertical"></li>
{% block sidebartoc %}
{% include "globaltoc.html" %}
{% include "localtoc.html" %}
{% endblock %}
{% block sidebarrel %}
{% include "relations.html" %}
{% endblock %}
{% block sidebarsourcelink %}
{% include "sourcelink.html" %}
{% endblock %}
</ul>
{% block sidebarsearch %}
{% include "searchbox.html" %}
{% endblock %}
</div>
</div>
</div>
{% endmacro %}
{%- block extrahead %}
<script type="text/javascript">
(function () {
/**
* Patch TOC list.
*
* Will mutate the underlying span to have a correct ul for nav.
*
* @param $ul: Element containing the nested ULs to mutate.
* @param minLevel: Starting level for nested lists. (1: global, 2: local).
*/
var patchToc = function ($ul, minLevel) {
var findA;
// Find all a "internal" tags, traversing recursively.
findA = function ($elem, level) {
level = level || 0;
var $items = $elem.find("> li > a.internal, > ul, > li > ul");
// Iterate everything in order.
$items.each(function (index, item) {
var $item = $(item),
tag = item.tagName.toLowerCase(),
pad = 15 + ((level - minLevel) * 10);
if (tag === 'a' && level >= minLevel) {
// Add to existing padding.
$item.css('padding-left', pad + "px");
} else if (tag === 'ul') {
// Recurse.
findA($item, level + 1);
}
});
};
console.log("HERE");
findA($ul);
};
$(document).ready(function () {
// Add styling, structure to TOC's.
$(".dropdown-menu").each(function () {
$(this).find("ul").each(function (index, item){
var $item = $(item);
$item.addClass('unstyled');
});
$(this).find("li").each(function () {
$(this).parent().append(this);
});
});
// Patch in level.
patchToc($("ul.globaltoc"), 2);
patchToc($("ul.localtoc"), 2);
// Enable dropdown.
$('.dropdown-toggle').dropdown();
});
}());
</script>
{% endblock %}
{% block header %}{{ navBar() }}{% endblock %}
{# Silence the sidebars and relbars #}
{% block sidebar1 %}{% endblock %}
{% block sidebar2 %}{% endblock %}
{% block relbar1 %}{% endblock %}
{% block relbar2 %}{% endblock %}
{%- block content %}
<div class="container">
{% block body %} {% endblock %}
</div>
{%- endblock %}
{%- block footer %}
<footer class="footer">
<div class="container">
<p class="pull-right"><a href="#">Back to top</a></p>
<p>
{%- if show_copyright %}
{%- if hasdoc('copyright') %}
{% trans path=pathto('copyright'), copyright=copyright|e %}&copy; <a href="{{ path }}">Copyright</a> {{ copyright }}.{% endtrans %}<br/>
{%- else %}
{% trans copyright=copyright|e %}&copy; Copyright {{ copyright }}.{% endtrans %}<br/>
{%- endif %}
{%- endif %}
{%- if last_updated %}
{% trans last_updated=last_updated|e %}Last updated on {{ last_updated }}.{% endtrans %}<br/>
{%- endif %}
{%- if show_sphinx %}
{% trans sphinx_version=sphinx_version|e %}Created using <a href="http://sphinx.pocoo.org/">Sphinx</a> {{ sphinx_version }}.{% endtrans %}<br/>
{%- endif %}
</p>
</div>
</footer>
{%- endblock %}

@@ -1,5 +0,0 @@
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">{{ _('Page') }} <b class="caret"></b></a>
<ul class="dropdown-menu localtoc">{{ toc }}</ul>
<!--<span class="localtoc">{{ toc }}</span>-->
</li>

@@ -1,8 +0,0 @@
{%- if prev %}
<li><a href="{{ prev.link|e }}"
title="{{ _('previous chapter') }}">{{ "&laquo;"|safe }} {{ prev.title }}</a></li>
{%- endif %}
{%- if next %}
<li><a href="{{ next.link|e }}"
title="{{ _('next chapter') }}">{{ next.title }} {{ "&raquo;"|safe }}</a></li>
{%- endif %}

@@ -1,7 +0,0 @@
{%- if pagename != "search" %}
<form class="navbar-search pull-right" style="margin-bottom:-3px;" action="{{ pathto('search') }}" method="get">
<input type="text" name="q" placeholder="Search" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
{%- endif %}

@@ -1,4 +0,0 @@
{%- if show_source and has_source and sourcename %}
<li><a href="{{ pathto('_sources/' + sourcename, true)|e }}"
rel="nofollow">{{ _('Source') }}</a></li>
{%- endif %}

File diff suppressed because one or more lines are too long

@@ -1,24 +0,0 @@
/*
* bootstrap-sphinx.css
* ~~~~~~~~~~~~~~~~~~~~
*
* Sphinx stylesheet -- Twitter Bootstrap theme.
*/
body {
padding-top: 52px;
}
.navbar .brand {
color: #FFF;
text-shadow: #777 2px 2px 3px;
}
{%- block sidebarlogo %}
{%- if logo %}
.navbar h3 a, .navbar .brand {
background: transparent url("{{ logo }}") no-repeat 22px 3px;
padding-left: 62px;
}
{%- endif %}
{%- endblock %}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

Binary file not shown. (before: image, 8.6 KiB)

Binary file not shown. (before: image, 14 KiB)

File diff suppressed because one or more lines are too long

@@ -1,5 +0,0 @@
# Twitter Bootstrap Theme
[theme]
inherit = basic
stylesheet = basic.css
pygments_style = tango

@@ -1,12 +0,0 @@
{%- if pagename != "search" %}
<h3>Quick search</h3>
<form class="navbar-search pull-right" style="margin-bottom:-3px;" action="{{ pathto('search') }}" method="get">
<input type="text" name="q" placeholder="Search" />
<input type="submit" value="Go" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
<p class="searchtip" style="font-size: 90%">
Enter search terms or a module, class or function name.
</p>
</form>
{%- endif %}

@@ -1,3 +0,0 @@
<h3>Downloadable PDF</h3>
<a href="http://docs.mirantis.com/fuel-dev/pdf/Fuel.pdf"
rel="nofollow">Fuel Development Documentation</a>

@@ -1,101 +0,0 @@
# Add any Sphinx extension module names here, as strings.
# They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions += ['sphinx.ext.inheritance_diagram', 'sphinxcontrib.blockdiag',
'sphinxcontrib.actdiag', 'sphinxcontrib.seqdiag',
'sphinxcontrib.nwdiag']
# The encoding of source files.
source_encoding = 'utf-8-sig'
#source_encoding = 'shift_jis'
# The language for content autogenerated by Sphinx.
language = 'en'
#language = 'ja'
# The theme to use for HTML and HTML Help pages.
#html_theme = 'default'
#html_theme = 'sphinxdoc'
#html_theme = 'scrolls'
#html_theme = 'agogo'
#html_theme = 'traditional'
#html_theme = 'nature'
#html_theme = 'haiku'
# If this is not the empty string, a 'Last updated on:' timestamp
# is inserted at every page bottom, using the given strftime() format.
# Default is '%b %d, %Y' (or a locale-dependent equivalent).
html_last_updated_fmt = '%Y/%m/%d'
# Enable Antialiasing
blockdiag_antialias = True
actdiag_antialias = True
seqdiag_antialias = True
nwdiag_antialias = True
extensions += ['rst2pdf.pdfbuilder']
pdf_documents = [
(master_doc, project, project, copyright),
]
pdf_stylesheets = ['sphinx', 'kerning', 'a4']
pdf_language = "en_US"
# Mode for literal blocks wider than the frame. Can be
# overflow, shrink or truncate
pdf_fit_mode = "shrink"
# Section level that forces a break page.
# For example: 1 means top-level sections start in a new page
# 0 means disabled
#pdf_break_level = 0
# When a section starts in a new page, force it to be 'even', 'odd',
# or just use 'any'
pdf_breakside = 'any'
# Insert footnotes where they are defined instead of
# at the end.
pdf_inline_footnotes = False
# Verbosity level: 0, 1 or 2.
pdf_verbosity = 0
# If false, no index is generated.
pdf_use_index = True
# If false, no modindex is generated.
pdf_use_modindex = True
# If false, no coverpage is generated.
pdf_use_coverpage = True
# Name of the cover page template to use
#pdf_cover_template = 'sphinxcover.tmpl'
# Documents to append as an appendix to all manuals.
#pdf_appendices = []
# Enable experimental feature to split table cells. Use it
# if you get "DelayedTable too big" errors
#pdf_splittables = False
# Set the default DPI for images
#pdf_default_dpi = 72
# Enable rst2pdf extension modules (default is only vectorpdf)
# you need vectorpdf if you want to use sphinx's graphviz support
#pdf_extensions = ['vectorpdf']
# Page template name for "regular" pages
#pdf_page_template = 'cutePage'
# Show Table Of Contents at the beginning?
pdf_use_toc = True
# How many levels deep should the table of contents be?
pdf_toc_depth = 3
# Add section number to section references
pdf_use_numbered_links = False
# Background images fitting mode
pdf_fit_background_mode = 'scale'

@@ -1,264 +0,0 @@
# -*- coding: utf-8 -*-
#
# fuel documentation build configuration file
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import os
import sys
sys.path.insert(0, os.path.join(os.path.abspath('.'), "..", "nailgun"))
if "SYSTEM_TESTS_PATH" in os.environ:
sys.path.append(os.environ.get("SYSTEM_TESTS_PATH"))
autodoc_default_flags = ['members', 'show-inheritance']
autodoc_member_order = 'bysource'
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'rst2pdf.pdfbuilder',
'sphinxcontrib.plantuml',
'nailgun.autoapidoc'
]
plantuml = ['java', '-jar', 'plantuml.jar']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel'
copyright = u'2012-2014, Mirantis'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0'
# The full version, including alpha/beta/rc tags.
release = ''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
language = 'en'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output --------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'sphinxdoc'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ["_templates"]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
html_sidebars = {
'**': ['localtoc.html', 'relations.html', 'sourcelink.html', 'sidebarpdf.html', 'searchbox.html'],
}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'fuel-doc'
# -- Options for LaTeX output -------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
'papersize': 'a4paper',
# The font size ('10pt', '11pt' or '12pt').
'pointsize': '12pt',
# Additional stuff for the LaTeX preamble.
'preamble': '''
\setcounter{tocdepth}{3}
\usepackage{tocbibind}
\pagenumbering{arabic}
'''
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [
('index', 'fuel.tex', u'Fuel Documentation', u'Mike Scherbakov', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output -------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuel', u'Fuel Documentation', [u'Mike Scherbakov'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -----------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [(
'index', 'fuel', u'Fuel Documentation', u'Mike Scherbakov',
'fuel', 'OpenStack Installer', 'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# -- Additional Settings ------------------------------------------------------
execfile('./common_conf.py')
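
Note that execfile() exists only under Python 2, which is consistent with the u'' literals above. Under Python 3 the same inclusion of common_conf.py could be sketched as:

    # Python 3 equivalent of the execfile() call above (a sketch):
    with open('./common_conf.py') as f:
        exec(compile(f.read(), './common_conf.py', 'exec'))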

@@ -1,67 +0,0 @@
REST API Reference
==================

.. contents::
   :local:

Releases API
------------

.. automodule:: nailgun.api.v1.handlers.release
   :inherited-members:

Clusters API
------------

.. automodule:: nailgun.api.v1.handlers.cluster
   :inherited-members:

Nodes API
---------

.. automodule:: nailgun.api.v1.handlers.node
   :inherited-members:

Disks API
---------

.. automodule:: nailgun.extensions.volume_manager.handlers.disks
   :inherited-members:

Network Configuration API
-------------------------

.. automodule:: nailgun.extensions.network_manager.handlers.network_configuration
   :inherited-members:

Notifications API
-----------------

.. automodule:: nailgun.api.v1.handlers.notifications
   :inherited-members:

Tasks API
---------

.. automodule:: nailgun.api.v1.handlers.tasks
   :inherited-members:

Logs API
--------

.. automodule:: nailgun.api.v1.handlers.logs
   :inherited-members:

Version API
-----------

.. automodule:: nailgun.api.v1.handlers.version
   :inherited-members:
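
The handlers documented above back Nailgun's REST API. As a hypothetical client sketch, assuming a Nailgun master reachable at 10.20.0.2 on port 8000 and the conventional /api prefix (both are assumptions, not confirmed by this excerpt):

    import requests

    BASE = "http://10.20.0.2:8000/api"  # assumed Nailgun endpoint

    # List a few of the resources served by the handlers above.
    for resource in ("releases", "clusters", "nodes"):
        resp = requests.get("{0}/{1}".format(BASE, resource))
        resp.raise_for_status()
        print(resource, resp.json())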

@@ -1,29 +0,0 @@
.. _objects-reference:

Objects Reference
=================

.. contents::
   :local:

Base Objects
------------

.. automodule:: nailgun.objects.base

Release-related Objects
-----------------------

.. automodule:: nailgun.objects.release

Cluster-related Objects
-----------------------

.. automodule:: nailgun.objects.cluster

Node-related Objects
--------------------

.. automodule:: nailgun.objects.node

@@ -1,10 +0,0 @@
.. _contents:

Table of contents
=================

.. toctree::

   development/api_doc
   development/objects

@@ -1,199 +0,0 @@
@ECHO OFF
REM Command file for Sphinx documentation
if "%SPHINXBUILD%" == "" (
set SPHINXBUILD=sphinx-build
)
set BUILDDIR=_build
set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .
set I18NSPHINXOPTS=%SPHINXOPTS% .
if NOT "%PAPER%" == "" (
set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%
)
if "%1" == "" goto help
if "%1" == "help" (
:help
echo.Please use `make ^<target^>` where ^<target^> is one of
echo. html to make standalone HTML files
echo. dirhtml to make HTML files named index.html in directories
echo. singlehtml to make a single large HTML file
echo. pickle to make pickle files
echo. json to make JSON files
echo. htmlhelp to make HTML files and a HTML help project
echo. qthelp to make HTML files and a qthelp project
echo. devhelp to make HTML files and a Devhelp project
echo. epub to make an epub
echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
echo. pdf to make PDF files
echo. text to make text files
echo. man to make manual pages
echo. texinfo to make Texinfo files
echo. gettext to make PO message catalogs
echo. changes to make an overview over all changed/added/deprecated items
echo. linkcheck to check all external links for integrity
echo. doctest to run all doctests embedded in the documentation if enabled
goto end
)
if "%1" == "clean" (
for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
del /q /s %BUILDDIR%\*
goto end
)
if "%1" == "html" (
%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/html.
goto end
)
if "%1" == "dirhtml" (
%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
goto end
)
if "%1" == "singlehtml" (
%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
goto end
)
if "%1" == "pickle" (
%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the pickle files.
goto end
)
if "%1" == "json" (
%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can process the JSON files.
goto end
)
if "%1" == "htmlhelp" (
%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run HTML Help Workshop with the ^
.hhp project file in %BUILDDIR%/htmlhelp.
goto end
)
if "%1" == "qthelp" (
%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished; now you can run "qcollectiongenerator" with the ^
.qhcp project file in %BUILDDIR%/qthelp, like this:
echo.^> qcollectiongenerator %BUILDDIR%\qthelp\fuel.qhcp
echo.To view the help file:
echo.^> assistant -collectionFile %BUILDDIR%\qthelp\fuel.qhc
goto end
)
if "%1" == "devhelp" (
%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
if errorlevel 1 exit /b 1
echo.
echo.Build finished.
goto end
)
if "%1" == "epub" (
%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The epub file is in %BUILDDIR%/epub.
goto end
)
if "%1" == "latex" (
%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
goto end
)
if "%1" == "pdf" (
%SPHINXBUILD% -b pdf %ALLSPHINXOPTS% %BUILDDIR%/pdf
if errorlevel 1 exit /b 1
echo.
echo.Build finished; the PDF files are in %BUILDDIR%/pdf.
goto end
)
if "%1" == "text" (
%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The text files are in %BUILDDIR%/text.
goto end
)
if "%1" == "man" (
%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The manual pages are in %BUILDDIR%/man.
goto end
)
if "%1" == "texinfo" (
%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
goto end
)
if "%1" == "gettext" (
%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
if errorlevel 1 exit /b 1
echo.
echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
goto end
)
if "%1" == "changes" (
%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
if errorlevel 1 exit /b 1
echo.
echo.The overview file is in %BUILDDIR%/changes.
goto end
)
if "%1" == "linkcheck" (
%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
if errorlevel 1 exit /b 1
echo.
echo.Link check complete; look for any errors in the above output ^
or in %BUILDDIR%/linkcheck/output.txt.
goto end
)
if "%1" == "doctest" (
%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
if errorlevel 1 exit /b 1
echo.
echo.Testing of doctests in the sources finished, look at the ^
results in %BUILDDIR%/doctest/output.txt.
goto end
)
:end

@@ -1,5 +0,0 @@
include manage.py
include fuel-cli/fuel
recursive-include nailgun *
recursive-include static *
include *requirements.txt

@@ -1,69 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import re

from psycopg2 import connect

from nailgun.settings import settings


def pytest_addoption(parser):
    parser.addoption("--dbname", default=settings.DATABASE['name'],
                     help="Overwrite database name")
    parser.addoption("--cleandb", default=False, action="store_true",
                     help="Provide this flag to dropdb/syncdb for all slaves")


def pytest_configure(config):
    db_name = config.getoption('dbname')
    if hasattr(config, 'slaveinput'):
        # slaveid has the format "gw1"; it is an internal pytest detail,
        # and we only want the numeric suffix from it
        uid = re.search(r'\d+', config.slaveinput['slaveid']).group(0)
        db_name = '{0}{1}'.format(db_name, uid)
    connection = connect(
        dbname='postgres', user=settings.DATABASE['user'],
        host=settings.DATABASE['host'],
        password=settings.DATABASE['passwd'])
    cursor = connection.cursor()
    if not_present(cursor, db_name):
        create_database(connection, cursor, db_name)
    settings.DATABASE['name'] = db_name
    cleandb = config.getoption('cleandb')
    if cleandb:
        from nailgun.db import dropdb, syncdb
        dropdb()
        syncdb()


def pytest_unconfigure(config):
    cleandb = config.getoption('cleandb')
    if cleandb:
        from nailgun.db import dropdb
        dropdb()


def create_database(connection, cursor, name):
    connection.set_isolation_level(0)
    cursor.execute('create database {0}'.format(name))
    connection.set_isolation_level(1)
    cursor.close()
    connection.close()


def not_present(cur, name):
    cur.execute('select datname from pg_database;')
    db_list = cur.fetchall()
    return all([name not in row for row in db_list])
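
To make the per-slave naming in pytest_configure concrete: each pytest-xdist worker reports a slaveid such as "gw1", and only its numeric suffix is appended to the configured database name. A standalone sketch of that derivation (the base name "nailgun" here is an assumed example for settings.DATABASE['name']):

    import re

    def slave_db_name(base, slaveid):
        # e.g. base="nailgun", slaveid="gw1" -> "nailgun1"
        uid = re.search(r'\d+', slaveid).group(0)
        return '{0}{1}'.format(base, uid)

    assert slave_db_name("nailgun", "gw1") == "nailgun1"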

@@ -1,2 +0,0 @@
Copy the file fake-target-mcollective.log to /var/tmp to emulate node logs when FAKE-TASKS is enabled.
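
A sketch of that copy step in Python, assuming the sample log sits in the current working directory:

    import shutil

    # Put the sample log where the fake tasks expect node logs to appear.
    shutil.copy("fake-target-mcollective.log", "/var/tmp/")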

@@ -1,309 +0,0 @@
2013-01-16T12:26:36 info: # Logfile created on Wed Jan 16 12:26:36 +0000 2013 by logger.rb/1.2.6
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.492877 #834] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Facts::Yaml_facts from mcollective/facts/yaml_facts.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.503049 #834] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin facts_plugin with class MCollective::Facts::Yaml_facts single_instance: true
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.503142 #834] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Connector::Stomp from mcollective/connector/stomp.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.503832 #834] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin connector_plugin with class MCollective::Connector::Stomp single_instance: true
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.503940 #834] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Security::Psk from mcollective/security/psk.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.504627 #834] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin security_plugin with class MCollective::Security::Psk single_instance: true
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.504751 #834] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Registration::Agentlist from mcollective/registration/agentlist.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.505116 #834] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin registration_plugin with class MCollective::Registration::Agentlist single_instance: true
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.505348 #834] DEBUG -- : pluginmanager.rb:47:in `<<' Registering plugin global_stats with class MCollective::RunnerStats single_instance: true
2013-01-16T12:26:36 info: I, [2013-01-16T12:26:36.505395 #834] INFO -- : mcollectived:31 The Marionette Collective 2.2.1 started logging at debug level
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.505433 #834] DEBUG -- : mcollectived:34 Starting in the background (true)
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.512411 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin global_stats with class MCollective::RunnerStats
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.512524 #838] DEBUG -- : pluginmanager.rb:80:in `[]' Returning new plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.512598 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin global_stats with class MCollective::RunnerStats
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.512662 #838] DEBUG -- : pluginmanager.rb:80:in `[]' Returning new plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.512756 #838] DEBUG -- : stomp.rb:150:in `connect' Connecting to 10.20.0.2:61613
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.518718 #838] DEBUG -- : agents.rb:26:in `loadagents' Reloading all agents from disk
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.518854 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found erase_node at /usr/libexec/mcollective/mcollective/agent/erase_node.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.518923 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Erase_node from mcollective/agent/erase_node.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.539956 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for erase_node
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.540066 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin erase_node_agent with class MCollective::Agent::Erase_node single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.540121 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin erase_node_agent with class MCollective::Agent::Erase_node
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.540416 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/erase_node'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.541162 #838] DEBUG -- : base.rb:93:in `findddlfile' Found erase_node ddl at /usr/libexec/mcollective/mcollective/agent/erase_node.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.541332 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.541417 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.erase_node.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.541706 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found discovery at /usr/libexec/mcollective/mcollective/agent/discovery.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.541796 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Discovery from mcollective/agent/discovery.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545131 #838] DEBUG -- : agents.rb:91:in `activate_agent?' MCollective::Agent::Discovery does not have an activate? method, activating as default
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545221 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin discovery_agent with class MCollective::Agent::Discovery single_instance: true
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545274 #838] DEBUG -- : pluginmanager.rb:80:in `[]' Returning new plugin discovery_agent with class MCollective::Agent::Discovery
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545353 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545422 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.discovery.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545540 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found systemtype at /usr/libexec/mcollective/mcollective/agent/systemtype.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545602 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Systemtype from mcollective/agent/systemtype.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545788 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for systemtype
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545846 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin systemtype_agent with class MCollective::Agent::Systemtype single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545894 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin systemtype_agent with class MCollective::Agent::Systemtype
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.545998 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/systemtype'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546101 #838] DEBUG -- : base.rb:93:in `findddlfile' Found systemtype ddl at /usr/libexec/mcollective/mcollective/agent/systemtype.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546248 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546315 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.systemtype.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546400 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found net_probe at /usr/libexec/mcollective/mcollective/agent/net_probe.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546456 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Net_probe from mcollective/agent/net_probe.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546846 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for net_probe
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546903 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin net_probe_agent with class MCollective::Agent::Net_probe single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.546958 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin net_probe_agent with class MCollective::Agent::Net_probe
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.547054 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/net_probe'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.547148 #838] DEBUG -- : base.rb:93:in `findddlfile' Found net_probe ddl at /usr/libexec/mcollective/mcollective/agent/net_probe.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.547339 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.547402 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.net_probe.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.547492 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found rpuppet at /usr/libexec/mcollective/mcollective/agent/rpuppet.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.547549 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Rpuppet from mcollective/agent/rpuppet.rb
2013-01-16T12:26:36 err: E, [2013-01-16T12:26:36.551471 #838] ERROR -- : pluginmanager.rb:171:in `loadclass' Failed to load MCollective::Agent::Rpuppet: no such file to load -- puppet/util/command_line
2013-01-16T12:26:36 err: E, [2013-01-16T12:26:36.551553 #838] ERROR -- : agents.rb:71:in `loadagent' Loading agent rpuppet failed: no such file to load -- puppet/util/command_line
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.551626 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found rapply at /usr/libexec/mcollective/mcollective/agent/rapply.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.551688 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Rapply from mcollective/agent/rapply.rb
2013-01-16T12:26:36 err: E, [2013-01-16T12:26:36.555693 #838] ERROR -- : pluginmanager.rb:171:in `loadclass' Failed to load MCollective::Agent::Rapply: no such file to load -- puppet/application
2013-01-16T12:26:36 err: E, [2013-01-16T12:26:36.555757 #838] ERROR -- : agents.rb:71:in `loadagent' Loading agent rapply failed: no such file to load -- puppet/application
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.555823 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found puppetd at /usr/libexec/mcollective/mcollective/agent/puppetd.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.555882 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Puppetd from mcollective/agent/puppetd.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.556361 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for puppetd
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.556425 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin puppetd_agent with class MCollective::Agent::Puppetd single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.556474 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin puppetd_agent with class MCollective::Agent::Puppetd
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.556562 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/puppetd'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.556661 #838] DEBUG -- : base.rb:93:in `findddlfile' Found puppetd ddl at /usr/libexec/mcollective/mcollective/agent/puppetd.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.556942 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557042 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.puppetd.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557142 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found rpcutil at /usr/libexec/mcollective/mcollective/agent/rpcutil.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557197 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Rpcutil from mcollective/agent/rpcutil.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557546 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for rpcutil
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557603 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin rpcutil_agent with class MCollective::Agent::Rpcutil single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557650 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin rpcutil_agent with class MCollective::Agent::Rpcutil
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557725 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/rpcutil'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.557815 #838] DEBUG -- : base.rb:93:in `findddlfile' Found rpcutil ddl at /usr/libexec/mcollective/mcollective/agent/rpcutil.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558307 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558369 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.rpcutil.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558456 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found nailyfact at /usr/libexec/mcollective/mcollective/agent/nailyfact.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558512 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Nailyfact from mcollective/agent/nailyfact.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558816 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for nailyfact
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558872 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin nailyfact_agent with class MCollective::Agent::Nailyfact single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.558919 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin nailyfact_agent with class MCollective::Agent::Nailyfact
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.559015 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/nailyfact'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.559106 #838] DEBUG -- : base.rb:93:in `findddlfile' Found nailyfact ddl at /usr/libexec/mcollective/mcollective/agent/nailyfact.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.559285 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.559348 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.nailyfact.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.559435 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found node_indirector at /usr/libexec/mcollective/mcollective/agent/node_indirector.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.559490 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Node_indirector from mcollective/agent/node_indirector.rb
2013-01-16T12:26:36 err: E, [2013-01-16T12:26:36.564361 #838] ERROR -- : pluginmanager.rb:171:in `loadclass' Failed to load MCollective::Agent::Node_indirector: no such file to load -- puppet/node
2013-01-16T12:26:36 err: E, [2013-01-16T12:26:36.564424 #838] ERROR -- : agents.rb:71:in `loadagent' Loading agent node_indirector failed: no such file to load -- puppet/node
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.564490 #838] DEBUG -- : agents.rb:104:in `findagentfile' Found fake at /usr/libexec/mcollective/mcollective/agent/fake.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.564549 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Fake from mcollective/agent/fake.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.564715 #838] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for fake
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.564775 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin fake_agent with class MCollective::Agent::Fake single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.564824 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin fake_agent with class MCollective::Agent::Fake
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.564907 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/fake'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.565043 #838] DEBUG -- : base.rb:93:in `findddlfile' Found fake ddl at /usr/libexec/mcollective/mcollective/agent/fake.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.565181 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.565245 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.fake.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.565666 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Data::Agent_data from mcollective/data/agent_data.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.566017 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin agent_data with class MCollective::Data::Agent_data single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.566082 #838] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Data::Fstat_data from mcollective/data/fstat_data.rb
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.566294 #838] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin fstat_data with class MCollective::Data::Fstat_data single_instance: false
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.566365 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin fstat_data with class MCollective::Data::Fstat_data
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.566440 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'data/fstat_data'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.566660 #838] DEBUG -- : base.rb:93:in `findddlfile' Found fstat_data ddl at /usr/libexec/mcollective/mcollective/data/fstat_data.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567041 #838] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin agent_data with class MCollective::Data::Agent_data
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567117 #838] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'data/agent_data'
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567201 #838] DEBUG -- : base.rb:93:in `findddlfile' Found agent_data ddl at /usr/libexec/mcollective/mcollective/data/agent_data.ddl
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567366 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567427 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.mcollective.command
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567507 #838] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567603 #838] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /queue/mcollective.ca4c50b905dc21ea17a10549a6f5944f
2013-01-16T12:26:36 debug: D, [2013-01-16T12:26:36.567676 #838] DEBUG -- : stomp.rb:197:in `receive' Waiting for a message from Stomp
2013-01-16T12:26:37 warning: W, [2013-01-16T12:26:37.452256 #838] WARN -- : runner.rb:60:in `run' Exiting after signal: SIGTERM
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.452333 #838] DEBUG -- : stomp.rb:270:in `disconnect' Disconnecting from Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.609291 #958] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Facts::Yaml_facts from mcollective/facts/yaml_facts.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.619522 #958] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin facts_plugin with class MCollective::Facts::Yaml_facts single_instance: true
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.619617 #958] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Connector::Stomp from mcollective/connector/stomp.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.620322 #958] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin connector_plugin with class MCollective::Connector::Stomp single_instance: true
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.620433 #958] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Security::Psk from mcollective/security/psk.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.621145 #958] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin security_plugin with class MCollective::Security::Psk single_instance: true
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.621264 #958] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading Mcollective::Registration::Agentlist from mcollective/registration/agentlist.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.621654 #958] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin registration_plugin with class MCollective::Registration::Agentlist single_instance: true
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.621894 #958] DEBUG -- : pluginmanager.rb:47:in `<<' Registering plugin global_stats with class MCollective::RunnerStats single_instance: true
2013-01-16T12:26:37 info: I, [2013-01-16T12:26:37.621939 #958] INFO -- : mcollectived:31 The Marionette Collective 2.2.1 started logging at debug level
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.621974 #958] DEBUG -- : mcollectived:34 Starting in the background (true)
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.627841 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin global_stats with class MCollective::RunnerStats
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.627954 #962] DEBUG -- : pluginmanager.rb:80:in `[]' Returning new plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.628056 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin global_stats with class MCollective::RunnerStats
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.628129 #962] DEBUG -- : pluginmanager.rb:80:in `[]' Returning new plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.628224 #962] DEBUG -- : stomp.rb:150:in `connect' Connecting to 10.20.0.2:61613
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.636559 #962] DEBUG -- : agents.rb:26:in `loadagents' Reloading all agents from disk
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.636706 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found erase_node at /usr/libexec/mcollective/mcollective/agent/erase_node.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.636773 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Erase_node from mcollective/agent/erase_node.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.652064 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for erase_node
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.652159 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin erase_node_agent with class MCollective::Agent::Erase_node single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.652207 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin erase_node_agent with class MCollective::Agent::Erase_node
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.652505 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/erase_node'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.653250 #962] DEBUG -- : base.rb:93:in `findddlfile' Found erase_node ddl at /usr/libexec/mcollective/mcollective/agent/erase_node.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.653417 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.653487 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.erase_node.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.653736 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found discovery at /usr/libexec/mcollective/mcollective/agent/discovery.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.653795 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Discovery from mcollective/agent/discovery.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.653972 #962] DEBUG -- : agents.rb:91:in `activate_agent?' MCollective::Agent::Discovery does not have an activate? method, activating as default
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.656743 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin discovery_agent with class MCollective::Agent::Discovery single_instance: true
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.656800 #962] DEBUG -- : pluginmanager.rb:80:in `[]' Returning new plugin discovery_agent with class MCollective::Agent::Discovery
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.656877 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.656941 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.discovery.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657062 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found systemtype at /usr/libexec/mcollective/mcollective/agent/systemtype.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657124 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Systemtype from mcollective/agent/systemtype.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657296 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for systemtype
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657353 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin systemtype_agent with class MCollective::Agent::Systemtype single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657400 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin systemtype_agent with class MCollective::Agent::Systemtype
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657499 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/systemtype'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657601 #962] DEBUG -- : base.rb:93:in `findddlfile' Found systemtype ddl at /usr/libexec/mcollective/mcollective/agent/systemtype.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657741 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657801 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.systemtype.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657900 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found net_probe at /usr/libexec/mcollective/mcollective/agent/net_probe.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.657955 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Net_probe from mcollective/agent/net_probe.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658388 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for net_probe
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658447 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin net_probe_agent with class MCollective::Agent::Net_probe single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658496 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin net_probe_agent with class MCollective::Agent::Net_probe
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658572 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/net_probe'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658664 #962] DEBUG -- : base.rb:93:in `findddlfile' Found net_probe ddl at /usr/libexec/mcollective/mcollective/agent/net_probe.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658853 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.658919 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.net_probe.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.664250 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found rpuppet at /usr/libexec/mcollective/mcollective/agent/rpuppet.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.664346 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Rpuppet from mcollective/agent/rpuppet.rb
2013-01-16T12:26:37 err: E, [2013-01-16T12:26:37.665259 #962] ERROR -- : pluginmanager.rb:171:in `loadclass' Failed to load MCollective::Agent::Rpuppet: no such file to load -- puppet/util/command_line
2013-01-16T12:26:37 err: E, [2013-01-16T12:26:37.665313 #962] ERROR -- : agents.rb:71:in `loadagent' Loading agent rpuppet failed: no such file to load -- puppet/util/command_line
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.665388 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found rapply at /usr/libexec/mcollective/mcollective/agent/rapply.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.665444 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Rapply from mcollective/agent/rapply.rb
2013-01-16T12:26:37 err: E, [2013-01-16T12:26:37.668623 #962] ERROR -- : pluginmanager.rb:171:in `loadclass' Failed to load MCollective::Agent::Rapply: no such file to load -- puppet/application
2013-01-16T12:26:37 err: E, [2013-01-16T12:26:37.668684 #962] ERROR -- : agents.rb:71:in `loadagent' Loading agent rapply failed: no such file to load -- puppet/application
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.668751 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found puppetd at /usr/libexec/mcollective/mcollective/agent/puppetd.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.668811 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Puppetd from mcollective/agent/puppetd.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.669313 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for puppetd
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.669376 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin puppetd_agent with class MCollective::Agent::Puppetd single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.669426 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin puppetd_agent with class MCollective::Agent::Puppetd
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.669547 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/puppetd'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.669647 #962] DEBUG -- : base.rb:93:in `findddlfile' Found puppetd ddl at /usr/libexec/mcollective/mcollective/agent/puppetd.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.669926 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670016 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.puppetd.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670124 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found rpcutil at /usr/libexec/mcollective/mcollective/agent/rpcutil.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670181 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Rpcutil from mcollective/agent/rpcutil.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670528 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for rpcutil
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670583 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin rpcutil_agent with class MCollective::Agent::Rpcutil single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670630 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin rpcutil_agent with class MCollective::Agent::Rpcutil
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670703 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/rpcutil'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.670792 #962] DEBUG -- : base.rb:93:in `findddlfile' Found rpcutil ddl at /usr/libexec/mcollective/mcollective/agent/rpcutil.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.674272 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.674374 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.rpcutil.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.674490 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found nailyfact at /usr/libexec/mcollective/mcollective/agent/nailyfact.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.674556 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Nailyfact from mcollective/agent/nailyfact.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.674931 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for nailyfact
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675019 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin nailyfact_agent with class MCollective::Agent::Nailyfact single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675072 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin nailyfact_agent with class MCollective::Agent::Nailyfact
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675164 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/nailyfact'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675272 #962] DEBUG -- : base.rb:93:in `findddlfile' Found nailyfact ddl at /usr/libexec/mcollective/mcollective/agent/nailyfact.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675488 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675556 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.nailyfact.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675656 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found node_indirector at /usr/libexec/mcollective/mcollective/agent/node_indirector.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.675718 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Node_indirector from mcollective/agent/node_indirector.rb
2013-01-16T12:26:37 err: E, [2013-01-16T12:26:37.676526 #962] ERROR -- : pluginmanager.rb:171:in `loadclass' Failed to load MCollective::Agent::Node_indirector: no such file to load -- puppet/node
2013-01-16T12:26:37 err: E, [2013-01-16T12:26:37.676585 #962] ERROR -- : agents.rb:71:in `loadagent' Loading agent node_indirector failed: no such file to load -- puppet/node
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.676652 #962] DEBUG -- : agents.rb:104:in `findagentfile' Found fake at /usr/libexec/mcollective/mcollective/agent/fake.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.676711 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Agent::Fake from mcollective/agent/fake.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.676898 #962] DEBUG -- : agent.rb:138:in `activate?' Starting default activation checks for fake
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.676960 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin fake_agent with class MCollective::Agent::Fake single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.680570 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin fake_agent with class MCollective::Agent::Fake
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.680716 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'agent/fake'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.680834 #962] DEBUG -- : base.rb:93:in `findddlfile' Found fake ddl at /usr/libexec/mcollective/mcollective/agent/fake.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.681027 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.681100 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.fake.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.681567 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Data::Agent_data from mcollective/data/agent_data.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.681902 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin agent_data with class MCollective::Data::Agent_data single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.681965 #962] DEBUG -- : pluginmanager.rb:167:in `loadclass' Loading MCollective::Data::Fstat_data from mcollective/data/fstat_data.rb
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.682202 #962] DEBUG -- : pluginmanager.rb:44:in `<<' Registering plugin fstat_data with class MCollective::Data::Fstat_data single_instance: false
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.682275 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin fstat_data with class MCollective::Data::Fstat_data
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.682349 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'data/fstat_data'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.682577 #962] DEBUG -- : base.rb:93:in `findddlfile' Found fstat_data ddl at /usr/libexec/mcollective/mcollective/data/fstat_data.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.682933 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin agent_data with class MCollective::Data::Agent_data
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685273 #962] DEBUG -- : cache.rb:117:in `ttl' Cache miss on 'ddl' key 'data/agent_data'
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685379 #962] DEBUG -- : base.rb:93:in `findddlfile' Found agent_data ddl at /usr/libexec/mcollective/mcollective/data/agent_data.ddl
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685570 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685631 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /topic/mcollective.mcollective.command
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685714 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685809 #962] DEBUG -- : stomp.rb:241:in `subscribe' Subscribing to /queue/mcollective.c4ca4238a0b923820dcc509a6f75849b
2013-01-16T12:26:37 debug: D, [2013-01-16T12:26:37.685883 #962] DEBUG -- : stomp.rb:197:in `receive' Waiting for a message from Stomp
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222263 #962] DEBUG -- : runnerstats.rb:49:in `received' Incrementing total stat
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222338 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222419 #962] DEBUG -- : runnerstats.rb:38:in `validated' Incrementing validated stat
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222469 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222546 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222626 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222675 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222723 #962] DEBUG -- : base.rb:153:in `validate_filter?' Message passed the filter checks
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222764 #962] DEBUG -- : runnerstats.rb:26:in `passed' Incrementing passed stat
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222806 #962] DEBUG -- : runner.rb:80:in `agentmsg' Handling message for agent 'discovery' on collective 'mcollective'
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222853 #962] DEBUG -- : agents.rb:119:in `dispatch' Dispatching a message to agent discovery
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.222943 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin discovery_agent with class MCollective::Agent::Discovery
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223018 #962] DEBUG -- : stomp.rb:197:in `receive' Waiting for a message from Stomp
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223190 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223394 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223463 #962] DEBUG -- : base.rb:168:in `create_reply' Encoded a message for request c4252a6973535a958349627872a4f6b2
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223580 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223646 #962] DEBUG -- : stomp.rb:230:in `publish' Sending a broadcast message to STOMP target '/topic/mcollective.discovery.reply'
2013-01-16T12:29:47 debug: D, [2013-01-16T12:29:47.223902 #962] DEBUG -- : runnerstats.rb:56:in `sent' Incrementing replies stat
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207332 #962] DEBUG -- : runnerstats.rb:49:in `received' Incrementing total stat
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207419 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207502 #962] DEBUG -- : runnerstats.rb:38:in `validated' Incrementing validated stat
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207558 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207625 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207697 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207747 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207796 #962] DEBUG -- : base.rb:153:in `validate_filter?' Message passed the filter checks
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207839 #962] DEBUG -- : runnerstats.rb:26:in `passed' Incrementing passed stat
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207883 #962] DEBUG -- : runner.rb:80:in `agentmsg' Handling message for agent 'rpcutil' on collective 'mcollective'
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.207926 #962] DEBUG -- : agents.rb:119:in `dispatch' Dispatching a message to agent rpcutil
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.208034 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin rpcutil_agent with class MCollective::Agent::Rpcutil
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.208099 #962] DEBUG -- : stomp.rb:197:in `receive' Waiting for a message from Stomp
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.208265 #962] DEBUG -- : cache.rb:105:in `read' Cache hit on 'ddl' key 'agent/rpcutil'
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.209068 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.209135 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.209211 #962] DEBUG -- : base.rb:168:in `create_reply' Encoded a message for request 865fb61cf4bc5dd08b2152f494c1d9f8
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.209334 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.209411 #962] DEBUG -- : stomp.rb:230:in `publish' Sending a broadcast message to STOMP target '/topic/mcollective.rpcutil.reply'
2013-01-16T12:29:49 debug: D, [2013-01-16T12:29:49.209685 #962] DEBUG -- : runnerstats.rb:56:in `sent' Incrementing replies stat
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236481 #962] DEBUG -- : runnerstats.rb:49:in `received' Incrementing total stat
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236566 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236644 #962] DEBUG -- : runnerstats.rb:38:in `validated' Incrementing validated stat
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236698 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236766 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236835 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236883 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236932 #962] DEBUG -- : base.rb:153:in `validate_filter?' Message passed the filter checks
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.236973 #962] DEBUG -- : runnerstats.rb:26:in `passed' Incrementing passed stat
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237040 #962] DEBUG -- : runner.rb:80:in `agentmsg' Handling message for agent 'discovery' on collective 'mcollective'
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237083 #962] DEBUG -- : agents.rb:119:in `dispatch' Dispatching a message to agent discovery
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237175 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin discovery_agent with class MCollective::Agent::Discovery
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237232 #962] DEBUG -- : stomp.rb:197:in `receive' Waiting for a message from Stomp
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237398 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237608 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237676 #962] DEBUG -- : base.rb:168:in `create_reply' Encoded a message for request 8d7776d9c45b5f56b87570fce2fbcc27
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237829 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.237896 #962] DEBUG -- : stomp.rb:230:in `publish' Sending a broadcast message to STOMP target '/topic/mcollective.discovery.reply'
2013-01-16T12:29:58 debug: D, [2013-01-16T12:29:58.238151 #962] DEBUG -- : runnerstats.rb:56:in `sent' Incrementing replies stat
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224258 #962] DEBUG -- : runnerstats.rb:49:in `received' Incrementing total stat
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224345 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224423 #962] DEBUG -- : runnerstats.rb:38:in `validated' Incrementing validated stat
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224478 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224544 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224614 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224663 #962] DEBUG -- : base.rb:117:in `validate_filter?' Passing based on agent rpcutil
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224711 #962] DEBUG -- : base.rb:153:in `validate_filter?' Message passed the filter checks
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224753 #962] DEBUG -- : runnerstats.rb:26:in `passed' Incrementing passed stat
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224802 #962] DEBUG -- : runner.rb:80:in `agentmsg' Handling message for agent 'rpcutil' on collective 'mcollective'
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224844 #962] DEBUG -- : agents.rb:119:in `dispatch' Dispatching a message to agent rpcutil
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.224933 #962] DEBUG -- : pluginmanager.rb:88:in `[]' Returning new plugin rpcutil_agent with class MCollective::Agent::Rpcutil
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225004 #962] DEBUG -- : stomp.rb:197:in `receive' Waiting for a message from Stomp
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225162 #962] DEBUG -- : cache.rb:105:in `read' Cache hit on 'ddl' key 'agent/rpcutil'
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225500 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225567 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin security_plugin with class MCollective::Security::Psk
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225644 #962] DEBUG -- : base.rb:168:in `create_reply' Encoded a message for request abe36a613826518fb19cda0e2226dd6a
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225807 #962] DEBUG -- : pluginmanager.rb:83:in `[]' Returning cached plugin connector_plugin with class MCollective::Connector::Stomp
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.225880 #962] DEBUG -- : stomp.rb:230:in `publish' Sending a broadcast message to STOMP target '/topic/mcollective.rpcutil.reply'
2013-01-16T12:30:00 debug: D, [2013-01-16T12:30:00.226246 #962] DEBUG -- : runnerstats.rb:56:in `sent' Incrementing replies stat

View File

@@ -1,398 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import __main__
import argparse
import code
import os
import sys


def add_config_parameter(parser):
    parser.add_argument(
        '-c', '--config', dest='config_file', action='store', type=str,
        help='custom config file', default=None
    )


def load_run_parsers(subparsers):
    run_parser = subparsers.add_parser(
        'run', help='run application locally'
    )
    run_parser.add_argument(
        '-p', '--port', dest='port', action='store', type=str,
        help='application port', default='8000'
    )
    run_parser.add_argument(
        '-a', '--address', dest='address', action='store', type=str,
        help='application address', default='0.0.0.0'
    )
    run_parser.add_argument(
        '--fake-tasks', action='store_true', help='fake tasks'
    )
    run_parser.add_argument(
        '--fake-tasks-amqp', action='store_true',
        help='fake tasks with real AMQP'
    )
    run_parser.add_argument(
        '--keepalive',
        action='store_true',
        help='run keep alive thread'
    )
    add_config_parameter(run_parser)
    run_parser.add_argument(
        '--fake-tasks-tick-count', action='store', type=int,
        help='Fake tasks tick count'
    )
    run_parser.add_argument(
        '--fake-tasks-tick-interval', action='store', type=int,
        help='Fake tasks tick interval in seconds'
    )
    run_parser.add_argument(
        '--authentication-method', action='store', type=str,
        help='Choose authentication type',
        choices=['none', 'fake', 'keystone'],
    )


def load_db_parsers(subparsers):
    subparsers.add_parser(
        'syncdb', help='sync application database'
    )
    subparsers.add_parser(
        'dropdb', help='drop application database'
    )
    # fixtures
    loaddata_parser = subparsers.add_parser(
        'loaddata', help='load data from fixture'
    )
    loaddata_parser.add_argument(
        'fixture', action='store', help='json fixture to load'
    )
    dumpdata_parser = subparsers.add_parser(
        'dumpdata', help='dump models as fixture'
    )
    dumpdata_parser.add_argument(
        'model', action='store', help='model name to dump; underscored name '
        'should be used, e.g. network_group for NetworkGroup model'
    )
    generate_parser = subparsers.add_parser(
        'generate_nodes_fixture', help='generate new nodes fixture'
    )
    generate_parser.add_argument(
        '-n', '--total-nodes', dest='total_nodes', action='store', type=int,
        help='total nodes count to generate', required=True
    )
    generate_parser.add_argument(
        '-e', '--error-nodes', dest='error_nodes', action='store', type=int,
        help='error nodes count to generate'
    )
    generate_parser.add_argument(
        '-o', '--offline-nodes', dest='offline_nodes', action='store',
        type=int, help='offline nodes count to generate'
    )
    generate_parser.add_argument(
        '-i', '--min-ifaces-num', dest='min_ifaces_num', action='store',
        type=int, default=1,
        help='minimal number of ethernet interfaces for node'
    )
    subparsers.add_parser(
        'loaddefault',
        help='load data from default fixtures (settings.FIXTURES_TO_UPLOAD) '
             'and apply fake deployment tasks for all releases in database'
    )


def load_alembic_parsers(migrate_parser):
    alembic_parser = migrate_parser.add_subparsers(
        dest="alembic_command",
        help='alembic command'
    )
    for name in ['current', 'history', 'branches']:
        parser = alembic_parser.add_parser(name)
    for name in ['upgrade', 'downgrade']:
        parser = alembic_parser.add_parser(name)
        parser.add_argument('--delta', type=int)
        parser.add_argument('--sql', action='store_true')
        parser.add_argument('revision', nargs='?')
    parser = alembic_parser.add_parser('stamp')
    parser.add_argument('--sql', action='store_true')
    parser.add_argument('revision')
    parser = alembic_parser.add_parser('revision')
    parser.add_argument('-m', '--message')
    parser.add_argument('--autogenerate', action='store_true')
    parser.add_argument('--sql', action='store_true')
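
# Illustrative note (not part of the original file): the subparser tree built
# above yields invocations such as "migrate current" or
# "migrate upgrade --sql head" once load_db_migrate_parsers below attaches it
# under the "migrate" subcommand (and "extensions" reuses it the same way).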


def load_db_migrate_parsers(subparsers):
    migrate_parser = subparsers.add_parser(
        'migrate', help='dealing with DB migration'
    )
    load_alembic_parsers(migrate_parser)


def load_dbshell_parsers(subparsers):
    dbshell_parser = subparsers.add_parser(
        'dbshell', help='open database shell'
    )
    add_config_parameter(dbshell_parser)


def load_test_parsers(subparsers):
    subparsers.add_parser(
        'test', help='run unit tests'
    )


def load_shell_parsers(subparsers):
    shell_parser = subparsers.add_parser(
        'shell', help='open python REPL'
    )
    add_config_parameter(shell_parser)


def load_settings_parsers(subparsers):
    subparsers.add_parser(
        'dump_settings', help='dump current settings to YAML'
    )


def load_extensions_parsers(subparsers):
    extensions_parser = subparsers.add_parser(
        'extensions', help='extensions related actions')
    load_alembic_parsers(extensions_parser)


def load_yaql_parsers(subparsers):
    yaql_parser = subparsers.add_parser(
        'yaql', help='run live YAQL console for cluster'
    )
    yaql_parser.add_argument(
        '-c', '--cluster_id', dest='cluster_id', action='store', type=str,
        help='cluster id'
    )


def action_dumpdata(params):
    import logging

    logging.disable(logging.WARNING)

    from nailgun.db.sqlalchemy import fixman
    fixman.dump_fixture(params.model)
    sys.exit(0)


def action_generate_nodes_fixture(params):
    from oslo_serialization import jsonutils
    from nailgun.logger import logger
    from nailgun.utils import fake_generator

    logger.info('Generating new nodes fixture...')
    total_nodes_count = params.total_nodes
    fixtures_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                'nailgun/fixtures/')
    file_path = os.path.join(
        fixtures_dir,
        '{0}_fake_nodes_environment.json'.format(total_nodes_count)
    )

    generator = fake_generator.FakeNodesGenerator()
    res = generator.generate_fake_nodes(
        total_nodes_count, error_nodes_count=params.error_nodes,
        offline_nodes_count=params.offline_nodes,
        min_ifaces_num=params.min_ifaces_num)

    with open(file_path, 'w') as file_to_write:
        jsonutils.dump(res, file_to_write, indent=4)

    logger.info('Done. New fixture was stored in {0} file'.format(file_path))


def action_loaddata(params):
    from nailgun.db.sqlalchemy import fixman
    from nailgun.logger import logger

    logger.info("Uploading fixture...")
    with open(params.fixture, "r") as fileobj:
        fixman.upload_fixture(fileobj)
    logger.info("Done")


def action_loadfakedeploymenttasks(params):
    from nailgun.db.sqlalchemy import fixman
    from nailgun.logger import logger

    logger.info("Applying fake deployment tasks to all releases...")
    fixman.load_fake_deployment_tasks()
    logger.info("Done")


def action_loaddefault(params):
    from nailgun.db.sqlalchemy import fixman
    from nailgun.logger import logger

    logger.info("Uploading fixture...")
    fixman.upload_fixtures()
    logger.info("Applying fake deployment tasks to all releases...")
    fixman.load_fake_deployment_tasks()
    logger.info("Done")


def action_syncdb(params):
    from nailgun.db import syncdb
    from nailgun.logger import logger

    logger.info("Syncing database...")
    syncdb()
    logger.info("Done")


def action_dropdb(params):
    from nailgun.db import dropdb
    from nailgun.logger import logger

    logger.info("Dropping database...")
    dropdb()
    logger.info("Done")


def action_migrate(params):
    from nailgun.db.migration import action_migrate_alembic_core
    action_migrate_alembic_core(params)


def action_extensions(params):
    from nailgun.logger import logger
    from nailgun.db.migration import action_migrate_alembic_extension
    from nailgun.extensions import get_all_extensions

    for extension in get_all_extensions():
        if extension.alembic_migrations_path():
            logger.info('Running command for extension {0}'.format(
                extension.full_name()))
            action_migrate_alembic_extension(params, extension=extension)
        else:
            logger.info(
                'Extension {0} does not have migrations. '
                'Skipping...'.format(extension.full_name()))


def action_test(params):
    from nailgun.logger import logger
    from nailgun.unit_test import TestRunner

    logger.info("Running tests...")
    TestRunner.run()
    logger.info("Done")


def action_dbshell(params):
    from nailgun.settings import settings

    if params.config_file:
        settings.update_from_file(params.config_file)

    args = ['psql']
    env = {}
    if settings.DATABASE['passwd']:
        env['PGPASSWORD'] = settings.DATABASE['passwd']
    if settings.DATABASE['user']:
        args += ["-U", settings.DATABASE['user']]
    if settings.DATABASE['host']:
        args.extend(["-h", settings.DATABASE['host']])
    if settings.DATABASE['port']:
        args.extend(["-p", str(settings.DATABASE['port'])])
    args += [settings.DATABASE['name']]
    if os.name == 'nt':
        sys.exit(os.system(" ".join(args)))
    else:
        os.execvpe('psql', args, env)


def action_dump_settings(params):
    from nailgun.settings import settings
    sys.stdout.write(settings.dump())


def action_shell(params):
    from nailgun.db import db
    from nailgun.settings import settings

    if params.config_file:
        settings.update_from_file(params.config_file)
    try:
        from IPython import embed
        embed()
    except ImportError:
        code.interact(local={'db': db, 'settings': settings})


def action_yaql(params):
    from nailgun.fuyaql import fuyaql
    fuyaql.main(params.cluster_id)


def action_run(params):
    from nailgun.settings import settings

    settings.update({
        'LISTEN_PORT': int(params.port),
        'LISTEN_ADDRESS': params.address,
    })
    for attr in ['FAKE_TASKS', 'FAKE_TASKS_TICK_COUNT',
                 'FAKE_TASKS_TICK_INTERVAL', 'FAKE_TASKS_AMQP']:
        param = getattr(params, attr.lower())
        if param is not None:
            settings.update({attr: param})
    if params.authentication_method:
        auth_method = params.authentication_method
        settings.AUTH.update({'AUTHENTICATION_METHOD': auth_method})
    if params.config_file:
        settings.update_from_file(params.config_file)
    from nailgun.app import appstart
    appstart()


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(
        dest="action", help='actions'
    )

    load_run_parsers(subparsers)
    load_db_parsers(subparsers)
    load_db_migrate_parsers(subparsers)
    load_dbshell_parsers(subparsers)
    load_test_parsers(subparsers)
    load_shell_parsers(subparsers)
    load_settings_parsers(subparsers)
    load_extensions_parsers(subparsers)
    load_yaql_parsers(subparsers)

    params, other_params = parser.parse_known_args()
    sys.argv.pop(1)

    # getattr with a default, so an unknown action falls back to help output
    action = getattr(
        __main__,
        "action_{0}".format(params.action),
        None
    )
    action(params) if action else parser.print_help()
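
The dispatch step at the end of the script is worth a note: each subcommand name is resolved to a matching action_* function via getattr on the __main__ module. A minimal, self-contained sketch of the same idiom (not from the repository; the syncdb stub is invented for illustration):

import argparse
import sys

def action_syncdb(params):
    # stand-in for the real database sync
    print("pretend to sync the database")

parser = argparse.ArgumentParser()
subparsers = parser.add_subparsers(dest="action")
subparsers.add_parser("syncdb", help="sync application database")

params = parser.parse_args(["syncdb"])
# resolve "syncdb" -> action_syncdb, falling back to help on unknown actions
action = getattr(sys.modules[__name__],
                 "action_{0}".format(params.action), None)
action(params) if action else parser.print_help()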

View File

@@ -1,16 +0,0 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.handlers.base import forbid_client_caching
from nailgun.api.v1.handlers.base import load_db_driver

View File

@@ -1,96 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Handlers dealing with nodes assignment
"""

from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import validate

from nailgun.api.v1.validators.assignment import NodeAssignmentValidator
from nailgun.api.v1.validators.assignment import NodeUnassignmentValidator

from nailgun import consts
from nailgun import objects


class NodeAssignmentHandler(BaseHandler):
    """Node assignment handler"""

    validator = NodeAssignmentValidator

    @handle_errors
    @validate
    def POST(self, cluster_id):
        """:returns: Empty string

        :http: * 200 (nodes are successfully assigned)
               * 400 (invalid nodes data specified)
               * 404 (cluster/node not found in db)
        """
        cluster = self.get_object_or_404(
            objects.Cluster,
            cluster_id
        )
        data = self.checked_data(
            self.validator.validate_collection_update,
            cluster_id=cluster.id
        )
        nodes = self.get_objects_list_or_404(
            objects.NodeCollection,
            data.keys()
        )
        for node in nodes:
            update = {"cluster_id": cluster.id, "pending_roles": data[node.id]}
            # NOTE(el): don't update the pending_addition flag
            # if the node is already assigned to the cluster,
            # otherwise it would create problems for roles
            # update
            if not node.cluster:
                update["pending_addition"] = True
            objects.Node.update(node, update)
        # fuel-client expects valid json for all put and post requests
        raise self.http(200, None)


class NodeUnassignmentHandler(BaseHandler):
    """Node unassignment handler"""

    validator = NodeUnassignmentValidator

    @handle_errors
    @validate
    def POST(self, cluster_id):
        """:returns: Empty string

        :http: * 200 (node successfully unassigned)
               * 404 (cluster/node not found in db)
               * 400 (invalid data specified)
        """
        cluster = self.get_object_or_404(objects.Cluster, cluster_id)
        nodes = self.checked_data(
            self.validator.validate_collection_update,
            cluster_id=cluster.id
        )
        for node in nodes:
            if node.status == consts.NODE_STATUSES.discover:
                objects.Node.remove_from_cluster(node)
                objects.Node.update(node, {"pending_addition": False})
            else:
                objects.Node.update(node, {"pending_deletion": True})
        raise self.http(200, None)
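
Judging from NodeAssignmentHandler.POST above, the request body maps node ids to the lists of pending roles to assign (data.keys() selects the nodes, data[node.id] becomes pending_roles). A hypothetical payload, with node ids and role names invented for illustration:

import json

# {node_id: pending_roles}; every id must resolve to a node in the DB
payload = {"1": ["controller"], "2": ["compute", "cinder"]}
body = json.dumps(payload)
# POSTing this body assigns the roles; nodes not yet in the cluster also
# get pending_addition=True, as the handler above shows
print(body)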

View File

@@ -1,835 +0,0 @@
# -*- coding: utf-8 -*-

# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from datetime import datetime
from decorator import decorator
from oslo_serialization import jsonutils
import six
import traceback
import yaml

from distutils.version import StrictVersion
from sqlalchemy import exc as sa_exc
import web

from nailgun.api.v1.validators.base import BaseDefferedTaskValidator
from nailgun.api.v1.validators.base import BasicValidator
from nailgun.api.v1.validators.orchestrator_graph import \
    GraphSolverTasksValidator
from nailgun import consts
from nailgun.db import db
from nailgun import errors
from nailgun.logger import logger
from nailgun import objects
from nailgun.objects.serializers.base import BasicSerializer
from nailgun.orchestrator import orchestrator_graph
from nailgun.settings import settings
from nailgun import transactions
from nailgun import utils


def forbid_client_caching(handler):
    if web.ctx.path.startswith("/api"):
        web.header('Cache-Control',
                   'store, no-cache, must-revalidate,'
                   ' post-check=0, pre-check=0')
        web.header('Pragma', 'no-cache')
        dt = datetime.fromtimestamp(0).strftime(
            '%a, %d %b %Y %H:%M:%S GMT'
        )
        web.header('Expires', dt)
    return handler()


def load_db_driver(handler):
    """Wrap all handler calls so the transaction is handled accordingly:

    rollback if something went wrong, commit changes otherwise. Please note,
    only HTTPError should be raised up from this function. All other
    possible errors should be handled.
    """
    try:
        # execute handler and commit changes if all is ok
        response = handler()
        db.commit()
        return response
    except web.HTTPError:
        # a special case: commit changes if the HTTP error ends with
        # 200, 201, 202, etc.
        if web.ctx.status.startswith('2'):
            db.commit()
        else:
            db.rollback()
        raise
    except (sa_exc.IntegrityError, sa_exc.DataError) as exc:
        # respond with "400 Bad Request" if database constraints were broken
        db.rollback()
        raise BaseHandler.http(400, exc.message)
    except Exception:
        db.rollback()
        raise
    finally:
        db.remove()


class BaseHandler(object):
    validator = BasicValidator
    serializer = BasicSerializer

    fields = []

    @classmethod
    def render(cls, instance, fields=None):
        return cls.serializer.serialize(
            instance,
            fields=fields or cls.fields
        )

    @classmethod
    def http(cls, status_code, msg="", err_list=None, headers=None):
        """Raise an HTTP status code.

        Useful for returning status
        codes like 401 Unauthorized or 403 Forbidden.

        :param status_code: the HTTP status code as an integer
        :param msg: the message to send along, as a string
        :param err_list: list of fields with errors
        :param headers: the headers to send along, as a dictionary
        """
        class _nocontent(web.HTTPError):
            message = 'No Content'

            def __init__(self):
                super(_nocontent, self).__init__(
                    status='204 No Content',
                    data=self.message
                )

        class _range_not_satisfiable(web.HTTPError):
            message = 'Requested Range Not Satisfiable'

            def __init__(self):
                super(_range_not_satisfiable, self).__init__(
                    status='416 Range Not Satisfiable',
                    data=self.message
                )

        exc_status_map = {
            200: web.ok,
            201: web.created,
            202: web.accepted,
            204: _nocontent,

            301: web.redirect,
            302: web.found,

            400: web.badrequest,
            401: web.unauthorized,
            403: web.forbidden,
            404: web.notfound,
            405: web.nomethod,
            406: web.notacceptable,
            409: web.conflict,
            410: web.gone,
            415: web.unsupportedmediatype,
            416: _range_not_satisfiable,

            500: web.internalerror,
        }

        # web.py has a poor exception design: some of them receive
        # the `message` argument and some of them do not. the only
        # way to set a custom message is to assign it directly
        # to the `data` attribute. though, that won't work for
        # `internalerror` because it tries to do magic with the
        # application context without an explicit `message` argument.
        try:
            exc = exc_status_map[status_code](message=msg)
        except TypeError:
            exc = exc_status_map[status_code]()
        exc.data = msg

        exc.err_list = err_list or []
        exc.status_code = status_code

        headers = headers or {}
        for key, value in headers.items():
            web.header(key, value)

        return exc

    @classmethod
    def checked_data(cls, validate_method=None, **kwargs):
        try:
            data = kwargs.pop('data', web.data())
            method = validate_method or cls.validator.validate
            valid_data = method(data, **kwargs)
        except (
            errors.InvalidInterfacesInfo,
            errors.InvalidMetadata
        ) as exc:
            objects.Notification.create({
                "topic": "error",
                "message": exc.message
            })
            raise cls.http(400, exc.message)
        except (
            errors.NotAllowed
        ) as exc:
            raise cls.http(403, exc.message)
        except (
            errors.AlreadyExists
        ) as exc:
            raise cls.http(409, exc.message)
        except (
            errors.InvalidData,
            errors.NodeOffline,
            errors.NoDeploymentTasks,
            errors.UnavailableRelease,
            errors.CannotCreate,
            errors.CannotUpdate,
            errors.CannotDelete,
            errors.CannotFindExtension,
        ) as exc:
            raise cls.http(400, exc.message)
        except (
            errors.ObjectNotFound,
        ) as exc:
            raise cls.http(404, exc.message)
        except Exception:
            raise cls.http(500, traceback.format_exc())
        return valid_data

    def get_object_or_404(self, obj, *args, **kwargs):
        """Get object instance by ID

        :http: 404 when not found
        :returns: object instance
        """
        log_404 = kwargs.pop("log_404", None)
        log_get = kwargs.pop("log_get", None)
        uid = kwargs.get("id", (args[0] if args else None))
        if uid is None:
            if log_404:
                getattr(logger, log_404[0])(log_404[1])
            raise self.http(404, u'Invalid ID specified')
        else:
            instance = obj.get_by_uid(uid)
            if not instance:
                raise self.http(404, u'{0} not found'.format(obj.__name__))
            if log_get:
                getattr(logger, log_get[0])(log_get[1])
            return instance

    def get_objects_list_or_404(self, obj, ids):
        """Get list of objects

        :param obj: model object
        :param ids: list of ids
        :http: 404 when not found
        :returns: list of object instances
        """
        node_query = obj.filter_by_id_list(None, ids)
        objects_count = obj.count(node_query)

        if len(set(ids)) != objects_count:
            raise self.http(404, '{0} not found'.format(obj.__name__))

        return list(node_query)

    def raise_task(self, task):
        if task.status in [consts.TASK_STATUSES.ready,
                           consts.TASK_STATUSES.error]:
            status = 200
        else:
            status = 202
        raise self.http(status, objects.Task.to_json(task))

    @staticmethod
    def get_param_as_set(param_name, delimiter=',', default=None):
        """Parse an array param from web.input()

        :param param_name: parameter name in web.input()
        :type param_name: str
        :param delimiter: delimiter
        :type delimiter: str
        :returns: set of items
        :rtype: set of str or None
        """
        if param_name in web.input():
            param = getattr(web.input(), param_name)
            if param == '':
                return set()
            else:
                return set(six.moves.map(
                    six.text_type.strip,
                    param.split(delimiter))
                )
        else:
            return default
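
    # Illustrative note (not part of the original file): for a request like
    # ?roles=controller, compute ,cinder this helper returns
    # set(['controller', 'compute', 'cinder']) -- items are split on the
    # delimiter and stripped of surrounding whitespace.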

    @staticmethod
    def get_requested_mime():
        accept = web.ctx.env.get("HTTP_ACCEPT", "application/json")
        accept = accept.strip().split(',')[0]
        accept = accept.split(';')[0]
        return accept


def json_resp(data):
    if isinstance(data, (dict, list)) or data is None:
        return jsonutils.dumps(data)
    else:
        return data


@decorator
def handle_errors(func, cls, *args, **kwargs):
    try:
        return func(cls, *args, **kwargs)
    except web.HTTPError as http_error:
        if http_error.status_code != 204:
            web.header('Content-Type', 'application/json', unique=True)
        if http_error.status_code >= 400:
            http_error.data = json_resp({
                "message": http_error.data,
                "errors": http_error.err_list
            })
        else:
            http_error.data = json_resp(http_error.data)
        raise
    except errors.NailgunException as exc:
        logger.exception('NailgunException occurred')
        http_error = BaseHandler.http(400, exc.message)
        web.header('Content-Type', 'text/plain')
        raise http_error
    # intercepting all errors to avoid huge HTML output
    except Exception:
        logger.exception('Unexpected exception occurred')
        http_error = BaseHandler.http(
            500,
            (
                traceback.format_exc()
                if settings.DEVELOPMENT
                else 'Unexpected exception, please check logs'
            )
        )
        http_error.data = json_resp(http_error.data)
        web.header('Content-Type', 'text/plain')
        raise http_error


@decorator
def validate(func, cls, *args, **kwargs):
    request_validation_needed = True

    resource_type = "single"
    if issubclass(
        cls.__class__,
        CollectionHandler
    ) and not func.func_name == "POST":
        resource_type = "collection"

    if (
        func.func_name in ("GET", "DELETE") or
        getattr(cls.__class__, 'validator', None) is None or
        (resource_type == "single" and not cls.validator.single_schema) or
        (resource_type == "collection" and not cls.validator.collection_schema)
    ):
        request_validation_needed = False

    if request_validation_needed:
        BaseHandler.checked_data(
            cls.validator.validate_request,
            resource_type=resource_type
        )

    return func(cls, *args, **kwargs)


@decorator
def serialize(func, cls, *args, **kwargs):
    """Set the Content-Type of the response based on the Accept header.

    This decorator checks the Accept header received from the client
    and returns the corresponding wrapper (only JSON and YAML are
    currently supported). It can be used as is:

        @handle_errors
        @validate
        @serialize
        def GET(self):
            ...
    """
    accepted_types = (
        "application/json",
        "application/x-yaml",
        "*/*"
    )
    accept = cls.get_requested_mime()
    if accept not in accepted_types:
        raise BaseHandler.http(415)

    resp = func(cls, *args, **kwargs)
    if accept == 'application/x-yaml':
        web.header('Content-Type', 'application/x-yaml', unique=True)
        return yaml.dump(resp, default_flow_style=False)
    else:
        # default is json
        web.header('Content-Type', 'application/json', unique=True)
        return jsonutils.dumps(resp)


class SingleHandler(BaseHandler):

    single = None
    validator = BasicValidator

    @handle_errors
    @serialize
    def GET(self, obj_id):
        """:returns: JSONized REST object.

        :http: * 200 (OK)
               * 404 (object not found in db)
        """
        obj = self.get_object_or_404(self.single, obj_id)
        return self.single.to_dict(obj)

    @handle_errors
    @validate
    @serialize
    def PUT(self, obj_id):
        """:returns: JSONized REST object.

        :http: * 200 (OK)
               * 404 (object not found in db)
        """
        obj = self.get_object_or_404(self.single, obj_id)

        data = self.checked_data(
            self.validator.validate_update,
            instance=obj
        )

        self.single.update(obj, data)
        return self.single.to_dict(obj)

    @handle_errors
    @validate
    def DELETE(self, obj_id):
        """:returns: Empty string

        :http: * 204 (object successfully deleted)
               * 404 (object not found in db)
        """
        obj = self.get_object_or_404(
            self.single,
            obj_id
        )

        self.checked_data(
            self.validator.validate_delete,
            instance=obj
        )

        self.single.delete(obj)
        raise self.http(204)


class Pagination(object):
    """Get pagination scope from init or HTTP request arguments"""

    def convert(self, x):
        """Return None if x is None, else x as a non-negative int;
        raise 400 otherwise.
        """
        val = x
        if val is not None:
            if type(val) is not int:
                try:
                    val = int(x)
                except ValueError:
                    raise BaseHandler.http(400, 'Cannot convert "%s" to int'
                                           % x)
            # raise on negative values
            if val < 0:
                raise BaseHandler.http(400,
                                       'Negative limit/offset not allowed')
        return val

    def get_order_by(self, order_by):
        if order_by:
            order_by = [s.strip() for s in order_by.split(',') if s.strip()]
        return order_by if order_by else None

    def __init__(self, limit=None, offset=None, order_by=None):
        if limit is not None or offset is not None or order_by is not None:
            # init with provided arguments
            self.limit = self.convert(limit)
            self.offset = self.convert(offset)
            self.order_by = self.get_order_by(order_by)
        else:
            # init with HTTP arguments
            self.limit = self.convert(web.input(limit=None).limit)
            self.offset = self.convert(web.input(offset=None).offset)
            self.order_by = self.get_order_by(web.input(order_by=None)
                                              .order_by)
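
# Illustrative note (not part of the original file): a Pagination built with
# explicit arguments skips web.input() parsing entirely, e.g.
#     Pagination(limit=10, offset=20, order_by="name,-id")
# gives limit=10, offset=20 and order_by == ["name", "-id"].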


class CollectionHandler(BaseHandler):

    collection = None
    validator = BasicValidator
    eager = ()

    def get_scoped_query_and_range(self, pagination=None, filter_by=None):
        """Get filtered+paged collection query and collection.ContentRange obj

        Return a scoped query, and if pagination is requested then also
        return a ContentRange object (see NailgunCollection.content_range)
        to allow setting the Content-Range header (outside of this function).
        If pagination is not set/requested, return a query for all of the
        collection's objects.
        Allows getting the object count without fetching the objects - via
        content_range if pagination.limit=0.

        :param pagination: Pagination object
        :param filter_by: filter dict passed to query.filter_by(\*\*dict)
        :type filter_by: dict
        :returns: SQLAlchemy query and ContentRange object
        """
        pagination = pagination or Pagination()
        query = None
        content_range = None
        if self.collection and self.collection.single.model:
            query, content_range = self.collection.scope(pagination,
                                                         filter_by)
        if content_range:
            if not content_range.valid:
                raise self.http(416, 'Requested range "%s" cannot be '
                                     'satisfied' % content_range)
        return query, content_range

    def set_content_range(self, content_range):
        """Set the Content-Range header to indicate partial data

        :param content_range: NailgunCollection.content_range named tuple
        """
        txt = 'objects {x.first}-{x.last}/{x.total}'.format(x=content_range)
        web.header('Content-Range', txt)

    @handle_errors
    @validate
    @serialize
    def GET(self):
        """:returns: Collection of JSONized REST objects.

        :http: * 200 (OK)
               * 400 (Bad Request)
               * 416 (requested range not satisfiable)
        """
        query, content_range = self.get_scoped_query_and_range()
        if content_range:
            self.set_content_range(content_range)
        q = self.collection.eager(query, self.eager)
        return self.collection.to_list(q)

    @handle_errors
    @validate
    def POST(self):
        """:returns: JSONized REST object.

        :http: * 201 (object successfully created)
               * 400 (invalid object data specified)
               * 409 (object with such parameters already exists)
        """
        data = self.checked_data()

        try:
            new_obj = self.collection.create(data)
        except errors.CannotCreate as exc:
            raise self.http(400, exc.message)

        raise self.http(201, self.collection.single.to_json(new_obj))


class DBSingletonHandler(BaseHandler):
    """Manages an object that is supposed to have only one entry in the DB"""

    single = None
    validator = BasicValidator
    not_found_error = "Object not found in the DB"

    def get_one_or_404(self):
        try:
            instance = self.single.get_one(fail_if_not_found=True)
        except errors.ObjectNotFound:
            raise self.http(404, self.not_found_error)
        return instance

    @handle_errors
    @validate
    @serialize
    def GET(self):
        """Get singleton object from DB

        :http: * 200 (OK)
               * 404 (Object not found in DB)
        """
        instance = self.get_one_or_404()
        return self.single.to_dict(instance)

    @handle_errors
    @validate
    @serialize
    def PUT(self):
        """Change object in DB

        :http: * 200 (OK)
               * 400 (Invalid data)
               * 404 (Object not present in DB)
        """
        data = self.checked_data(self.validator.validate_update)
        instance = self.get_one_or_404()
        self.single.update(instance, data)
        return self.single.to_dict(instance)

    @handle_errors
    @validate
    @serialize
    def PATCH(self):
        """Update object

        :http: * 200 (OK)
               * 400 (Invalid data)
               * 404 (Object not present in DB)
        """
        data = self.checked_data(self.validator.validate_update)
        instance = self.get_one_or_404()
        instance.update(utils.dict_merge(
            self.single.serializer.serialize(instance), data
        ))
        return self.single.to_dict(instance)


class OrchestratorDeploymentTasksHandler(SingleHandler):
    """Handler for deployment graph serialization."""

    validator = GraphSolverTasksValidator

    @handle_errors
    @validate
    @serialize
    def GET(self, obj_id):
        """:returns: Deployment tasks

        :http: * 200 (OK)
               * 404 (object not found)
        """
        obj = self.get_object_or_404(self.single, obj_id)
        end = web.input(end=None).end
        start = web.input(start=None).start
        graph_type = web.input(graph_type=None).graph_type or None
        # web.py depends on [] to understand that there will be
        # multiple inputs
        include = web.input(include=[]).include

        # merged (cluster + plugins + release) tasks are returned for a
        # cluster, but a release's own tasks are returned for a release
        tasks = self.single.get_deployment_tasks(obj, graph_type=graph_type)
        if end or start:
            graph = orchestrator_graph.GraphSolver(tasks)
            for t in tasks:
                if StrictVersion(t.get('version')) >= \
                        StrictVersion(consts.TASK_CROSS_DEPENDENCY):
                    raise self.http(400, (
                        'Both "start" and "end" parameters are not allowed '
                        'for task-based deployment.'))
            try:
                return graph.filter_subgraph(
                    end=end, start=start, include=include).node.values()
            except errors.TaskNotFound as e:
                raise self.http(400, 'Cannot find task {0} by its '
                                     'name.'.format(e.task_name))
        return tasks

    @handle_errors
    @validate
    @serialize
    def PUT(self, obj_id):
        """:returns: Deployment tasks

        :http: * 200 (OK)
               * 400 (invalid data specified)
               * 404 (object not found in db)
        """
        obj = self.get_object_or_404(self.single, obj_id)
        graph_type = web.input(graph_type=None).graph_type or None
        data = self.checked_data(
            self.validator.validate_update,
            instance=obj
        )
        deployment_graph = objects.DeploymentGraph.get_for_model(
            obj, graph_type=graph_type)
        if deployment_graph:
            objects.DeploymentGraph.update(
                deployment_graph, {'tasks': data})
        else:
            deployment_graph = objects.DeploymentGraph.create_for_model(
                {'tasks': data}, obj, graph_type=graph_type)
        return objects.DeploymentGraph.get_tasks(deployment_graph)

    def POST(self, obj_id):
        """Creation of metadata disallowed

        :http: * 405 (method not supported)
        """
        raise self.http(405, 'Create not supported for this entity')

    def DELETE(self, obj_id):
        """Deletion of metadata disallowed

        :http: * 405 (method not supported)
        """
        raise self.http(405, 'Delete not supported for this entity')


class TransactionExecutorHandler(BaseHandler):

    def start_transaction(self, cluster, options):
        """Start a new transaction.

        :param cluster: the cluster object
        :param options: the transaction parameters
        :return: JSONized task object
        """
        try:
            manager = transactions.TransactionsManager(cluster.id)
            self.raise_task(manager.execute(**options))
        except errors.ObjectNotFound as e:
            raise self.http(404, e.message)
        except errors.DeploymentAlreadyStarted as e:
            raise self.http(409, e.message)
        except errors.InvalidData as e:
            raise self.http(400, e.message)


# TODO(enchantner): rewrite more handlers to inherit from this
# and move more common code here; this is a deprecated handler
class DeferredTaskHandler(TransactionExecutorHandler):
    """Abstract Deferred Task Handler"""

    validator = BaseDefferedTaskValidator
    single = objects.Task
    log_message = u"Starting deferred task on environment '{env_id}'"
    log_error = u"Error during execution of deferred task " \
                u"on environment '{env_id}': {error}"
    task_manager = None

    @classmethod
    def get_options(cls):
        return {}

    @classmethod
    def get_transaction_options(cls, cluster, options):
        """Find the graph for this action."""
        return None

    @handle_errors
    @validate
    def PUT(self, cluster_id):
        """:returns: JSONized Task object.

        :http: * 202 (task successfully executed)
               * 400 (invalid object data specified)
               * 404 (environment is not found)
               * 409 (task with such parameters already exists)
        """
        cluster = self.get_object_or_404(
            objects.Cluster,
            cluster_id,
            log_404=(
                u"warning",
                u"Error: there is no cluster "
                u"with id '{0}' in DB.".format(cluster_id)
            )
        )

        logger.info(self.log_message.format(env_id=cluster_id))

        try:
            options = self.get_options()
        except ValueError as e:
            raise self.http(400, six.text_type(e))

        try:
            self.validator.validate(cluster)
        except errors.NailgunException as e:
            raise self.http(400, e.message)

        if objects.Release.is_lcm_supported(cluster.release):
            # try to get a new graph to run the transaction manager
            try:
                transaction_options = self.get_transaction_options(
                    cluster, options
                )
            except errors.NailgunException as e:
                logger.exception("Failed to get transaction options")
                raise self.http(400, msg=six.text_type(e))

            if transaction_options:
                return self.start_transaction(cluster, transaction_options)

        try:
            task_manager = self.task_manager(cluster_id=cluster.id)
            task = task_manager.execute(**options)
        except (
            errors.AlreadyExists,
            errors.StopAlreadyRunning
        ) as exc:
            raise self.http(409, exc.message)
        except (
            errors.DeploymentNotRunning,
            errors.NoDeploymentTasks,
            errors.WrongNodeStatus,
            errors.UnavailableRelease,
            errors.CannotBeStopped,
        ) as exc:
            raise self.http(400, exc.message)
        except Exception as exc:
            logger.error(
                self.log_error.format(
                    env_id=cluster_id,
                    error=str(exc)
                )
            )
            # let it be 500
            raise

        self.raise_task(task)

View File

@ -1,151 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import codecs
import cStringIO
import csv
from hashlib import md5
import tempfile
import six
import web
from nailgun import objects
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.task.manager import GenerateCapacityLogTaskManager
"""
Capacity audit handlers
"""
class UnicodeWriter(object):
"""Unicode CSV writer.
A CSV writer which will write rows to CSV file "f",
which is encoded in the given encoding.
Source: http://docs.python.org/2/library/csv.html#examples
"""
def __init__(self, f, dialect=csv.excel, encoding="utf-8", **kwds):
# Redirect output to a queue
self.queue = cStringIO.StringIO()
self.writer = csv.writer(self.queue, dialect=dialect, **kwds)
self.stream = f
self.encoder = codecs.getincrementalencoder(encoding)()
def writerow(self, row):
        # We have only string and int types in the capacity log now,
        # and int values don't need to be converted to strings before
        # being written to the file.
        self.writer.writerow(
            [s if isinstance(s, int) else s.encode("utf-8") for s in row])
# Fetch UTF-8 output from the queue ...
data = self.queue.getvalue()
data = data.decode("utf-8")
# ... and reencode it into the target encoding
data = self.encoder.encode(data)
# write to the target stream
self.stream.write(data)
# empty queue
self.queue.truncate(0)
def writerows(self, rows):
for row in rows:
self.writerow(row)
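# A hypothetical usage sketch (not part of the original file): UnicodeWriter
# mirrors the csv.writer interface, so rows mixing unicode strings and ints
# can be written to any binary file-like object, e.g.:
#
#     with open('report.csv', 'wb') as out:
#         writer = UnicodeWriter(out, delimiter=',')
#         writer.writerow([u'Environment Name', u'Node Count'])
#         writer.writerows([[u'cluster-1', 10], [u'cluster-2', 5]])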
class CapacityLogHandler(BaseHandler):
"""Task single handler"""
fields = (
"id",
"report"
)
@handle_errors
@validate
@serialize
def GET(self):
capacity_log = objects.CapacityLog.get_latest()
if not capacity_log:
raise self.http(404)
return self.render(capacity_log)
@handle_errors
@validate
def PUT(self):
"""Starts capacity data generation.
:returns: JSONized Task object.
:http: * 200 (setup task successfully executed)
* 202 (setup task created and started)
* 400 (data validation failed)
* 404 (cluster not found in db)
"""
# TODO(pkaminski): this seems to be synchronous, no task needed here
manager = GenerateCapacityLogTaskManager()
task = manager.execute()
self.raise_task(task)
class CapacityLogCsvHandler(BaseHandler):
def GET(self):
capacity_log = objects.CapacityLog.get_latest()
if not capacity_log:
raise self.http(404)
report = capacity_log.report
f = tempfile.TemporaryFile(mode='r+b')
csv_file = UnicodeWriter(f, delimiter=',',
quotechar='|', quoting=csv.QUOTE_MINIMAL)
csv_file.writerow(['Fuel version', report['fuel_data']['release']])
csv_file.writerow(['Fuel UUID', report['fuel_data']['uuid']])
csv_file.writerow(['Environment Name', 'Node Count'])
for stat in report['environment_stats']:
csv_file.writerow([stat['cluster'], stat['nodes']])
        csv_file.writerow(['Total number of allocated nodes',
                           report['allocation_stats']['allocated']])
csv_file.writerow(['Total number of unallocated nodes',
report['allocation_stats']['unallocated']])
csv_file.writerow([])
csv_file.writerow(['Node role(s)',
'Number of nodes with this configuration'])
for roles, count in six.iteritems(report['roles_stat']):
csv_file.writerow([roles, count])
f.seek(0)
checksum = md5(f.read()).hexdigest()
csv_file.writerow([])
csv_file.writerow(['Checksum', checksum])
filename = 'fuel-capacity-audit.csv'
web.header('Content-Type', 'application/octet-stream')
web.header('Content-Disposition', 'attachment; filename="%s"' % (
filename))
web.header('Content-Length', f.tell())
f.seek(0)
return f
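# For illustration only (all values below are made up): the generated report
# is shaped roughly like
#
#     Fuel version,9.0
#     Fuel UUID,<uuid>
#     Environment Name,Node Count
#     cluster-1,10
#     Total number of allocated nodes,10
#     Total number of unallocated nodes,2
#
#     Node role(s),Number of nodes with this configuration
#     controller,1
#
#     Checksum,<md5 of everything above>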

View File

@ -1,511 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers dealing with clusters
"""
import traceback
import web
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import DeferredTaskHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import OrchestratorDeploymentTasksHandler
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.handlers.deployment_graph import \
RelatedDeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.deployment_graph import \
RelatedDeploymentGraphHandler
from nailgun.api.v1.validators.cluster import ClusterAttributesValidator
from nailgun.api.v1.validators.cluster import ClusterChangesValidator
from nailgun.api.v1.validators.cluster import ClusterStopDeploymentValidator
from nailgun.api.v1.validators.cluster import ClusterValidator
from nailgun.api.v1.validators.extension import ExtensionValidator
from nailgun import errors
from nailgun.extensions import remove_extensions_from_object
from nailgun.extensions import update_extensions_for_object
from nailgun.logger import logger
from nailgun import objects
from nailgun import utils
from nailgun.task.manager import ApplyChangesTaskManager
from nailgun.task.manager import ClusterDeletionManager
from nailgun.task.manager import ResetEnvironmentTaskManager
from nailgun.task.manager import StopDeploymentTaskManager
class ClusterHandler(SingleHandler):
"""Cluster single handler"""
single = objects.Cluster
validator = ClusterValidator
@handle_errors
@validate
@serialize
def PUT(self, obj_id):
""":returns: JSONized Cluster object.
:http: * 200 (OK)
               * 400 (error occurred while processing data)
* 404 (cluster not found in db)
"""
obj = self.get_object_or_404(self.single, obj_id)
data = self.checked_data(
self.validator.validate_update,
instance=obj
)
        # NOTE(aroma): if a node is being assigned to the cluster and a
        # network template has been set for the cluster, the network
        # template will also be applied to the node; in that case relevant
        # errors might occur, so they must be handled in order to form a
        # proper HTTP response for the user
try:
self.single.update(obj, data)
except errors.NetworkTemplateCannotBeApplied as exc:
raise self.http(400, exc.message)
return self.single.to_dict(obj)
@handle_errors
@validate
def DELETE(self, obj_id):
""":returns: {}
:http: * 202 (cluster deletion process launched)
* 400 (failed to execute cluster deletion process)
* 404 (cluster not found in db)
"""
cluster = self.get_object_or_404(self.single, obj_id)
task_manager = ClusterDeletionManager(cluster_id=cluster.id)
try:
logger.debug('Trying to execute cluster deletion task')
task = task_manager.execute(
force=utils.parse_bool(web.input(force='0').force)
)
except Exception as e:
            logger.warn('Error during execution of '
                        'cluster deletion task: %s' % str(e))
logger.warn(traceback.format_exc())
raise self.http(400, str(e))
raise self.http(202, objects.Task.to_json(task))
class ClusterCollectionHandler(CollectionHandler):
"""Cluster collection handler"""
collection = objects.ClusterCollection
validator = ClusterValidator
class ClusterChangesHandler(DeferredTaskHandler):
log_message = u"Trying to start deployment at environment '{env_id}'"
log_error = u"Error during execution of deployment " \
u"task on environment '{env_id}': {error}"
task_manager = ApplyChangesTaskManager
validator = ClusterChangesValidator
@classmethod
def get_transaction_options(cls, cluster, options):
"""Find sequence 'default' to use for deploy-changes handler."""
sequence = objects.DeploymentSequence.get_by_name_for_release(
cluster.release, 'deploy-changes'
)
if sequence:
return {
'dry_run': options['dry_run'],
'noop_run': options['noop_run'],
'force': options['force'],
'graphs': sequence.graphs,
}
@classmethod
def get_options(cls):
data = web.input(graph_type=None, dry_run="0", noop_run="0")
return {
'graph_type': data.graph_type or None,
'force': False,
'dry_run': utils.parse_bool(data.dry_run),
'noop_run': utils.parse_bool(data.noop_run),
}
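# For example (illustrative request, not from the original source):
#
#     PUT /api/clusters/1/changes?graph_type=provision&dry_run=1
#
# yields options {'graph_type': 'provision', 'force': False,
# 'dry_run': True, 'noop_run': False}, which DeferredTaskHandler.PUT then
# forwards to the task manager as execute(**options).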
class ClusterChangesForceRedeployHandler(ClusterChangesHandler):
log_message = u"Trying to force deployment of the environment '{env_id}'"
log_error = u"Error during execution of a forced deployment task " \
u"on environment '{env_id}': {error}"
@classmethod
def get_options(cls):
data = web.input(graph_type=None, dry_run="0", noop_run="0")
return {
'graph_type': data.graph_type or None,
'force': True,
'dry_run': utils.parse_bool(data.dry_run),
'noop_run': utils.parse_bool(data.noop_run),
}
class ClusterStopDeploymentHandler(DeferredTaskHandler):
log_message = u"Trying to stop deployment on environment '{env_id}'"
log_error = u"Error during execution of deployment " \
u"stopping task on environment '{env_id}': {error}"
task_manager = StopDeploymentTaskManager
validator = ClusterStopDeploymentValidator
class ClusterResetHandler(DeferredTaskHandler):
log_message = u"Trying to reset environment '{env_id}'"
log_error = u"Error during execution of resetting task " \
u"on environment '{env_id}': {error}"
task_manager = ResetEnvironmentTaskManager
@classmethod
def get_options(cls):
return {
'force': utils.parse_bool(web.input(force='0').force)
}
class ClusterAttributesHandler(BaseHandler):
"""Cluster attributes handler"""
fields = (
"editable",
)
validator = ClusterAttributesValidator
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized Cluster attributes.
:http: * 200 (OK)
* 404 (cluster not found in db)
* 500 (cluster has no attributes)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
if not cluster.attributes:
raise self.http(500, "No attributes found!")
return {
'editable': objects.Cluster.get_editable_attributes(
cluster, all_plugins_versions=True)
}
def PUT(self, cluster_id):
""":returns: JSONized Cluster attributes.
:http: * 200 (OK)
* 400 (wrong attributes data specified)
* 404 (cluster not found in db)
* 500 (cluster has no attributes)
"""
        # Due to the fact that we don't support PATCH requests and we're
        # using PUT requests for the same purpose with incomplete data,
        # let's follow the DRY principle and call the PATCH handler for now.
        # In the future, we should use the PUT method to overwrite the whole
        # entity and the PATCH method for changing its parts.
return self.PATCH(cluster_id)
@handle_errors
@validate
@serialize
def PATCH(self, cluster_id):
""":returns: JSONized Cluster attributes.
:http: * 200 (OK)
* 400 (wrong attributes data specified)
* 403 (attribute changing is not allowed)
* 404 (cluster not found in db)
* 500 (cluster has no attributes)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
if not cluster.attributes:
raise self.http(500, "No attributes found!")
force = utils.parse_bool(web.input(force='0').force)
data = self.checked_data(cluster=cluster, force=force)
try:
objects.Cluster.patch_attributes(cluster, data)
except errors.NailgunException as exc:
raise self.http(400, exc.message)
return {
'editable': objects.Cluster.get_editable_attributes(
cluster, all_plugins_versions=True)
}
class ClusterAttributesDefaultsHandler(BaseHandler):
"""Cluster default attributes handler"""
fields = (
"editable",
)
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized default Cluster attributes.
:http: * 200 (OK)
* 404 (cluster not found in db)
* 500 (cluster has no attributes)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
attrs = objects.Cluster.get_default_editable_attributes(cluster)
if not attrs:
raise self.http(500, "No attributes found!")
return {"editable": attrs}
@handle_errors
@validate
@serialize
def PUT(self, cluster_id):
""":returns: JSONized Cluster attributes.
:http: * 200 (OK)
* 400 (wrong attributes data specified)
* 404 (cluster not found in db)
* 500 (cluster has no attributes)
"""
cluster = self.get_object_or_404(
objects.Cluster,
cluster_id,
log_404=(
"error",
"There is no cluster "
"with id '{0}' in DB.".format(cluster_id)
)
)
if not cluster.attributes:
logger.error('ClusterAttributesDefaultsHandler: no attributes'
' found for cluster_id %s' % cluster_id)
raise self.http(500, "No attributes found!")
cluster.attributes.editable = (
objects.Cluster.get_default_editable_attributes(cluster))
objects.Cluster.add_pending_changes(cluster, "attributes")
logger.debug('ClusterAttributesDefaultsHandler:'
' editable attributes for cluster_id %s were reset'
' to default' % cluster_id)
return {"editable": cluster.attributes.editable}
class ClusterAttributesDeployedHandler(BaseHandler):
"""Cluster deployed attributes handler"""
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized deployed Cluster editable attributes with plugins
:http: * 200 (OK)
* 404 (cluster not found in db)
* 404 (cluster does not have deployed attributes)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
attrs = objects.Transaction.get_cluster_settings(
objects.TransactionCollection.get_last_succeed_run(cluster)
)
if not attrs:
raise self.http(
404, "Cluster does not have deployed attributes!"
)
return attrs
class ClusterGeneratedData(BaseHandler):
"""Cluster generated data"""
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized cluster generated data
:http: * 200 (OK)
* 404 (cluster not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
return cluster.attributes.generated
class ClusterDeploymentTasksHandler(OrchestratorDeploymentTasksHandler):
"""Cluster Handler for deployment graph serialization."""
single = objects.Cluster
class ClusterPluginsDeploymentTasksHandler(BaseHandler):
"""Handler for cluster plugins merged deployment tasks serialization."""
single = objects.Cluster
@handle_errors
@validate
@serialize
def GET(self, obj_id):
""":returns: Deployment tasks
:http: * 200 OK
* 404 (object not found)
"""
obj = self.get_object_or_404(self.single, obj_id)
graph_type = web.input(graph_type=None).graph_type or None
tasks = self.single.get_plugins_deployment_tasks(
obj, graph_type=graph_type)
return tasks
class ClusterReleaseDeploymentTasksHandler(BaseHandler):
"""Handler for cluster release deployment tasks serialization."""
single = objects.Cluster
@handle_errors
@validate
@serialize
def GET(self, obj_id):
""":returns: Deployment tasks
:http: * 200 OK
* 404 (object not found)
"""
obj = self.get_object_or_404(self.single, obj_id)
graph_type = web.input(graph_type=None).graph_type or None
tasks = self.single.get_release_deployment_tasks(
obj, graph_type=graph_type)
return tasks
class ClusterOwnDeploymentTasksHandler(BaseHandler):
"""Handler for cluster own deployment tasks serialization."""
single = objects.Cluster
@handle_errors
@validate
@serialize
def GET(self, obj_id):
""":returns: Cluster own deployment tasks
:http: * 200 OK
* 404 (object not found)
"""
obj = self.get_object_or_404(self.single, obj_id)
graph_type = web.input(graph_type=None).graph_type or None
tasks = self.single.get_own_deployment_tasks(
obj, graph_type=graph_type)
return tasks
class ClusterDeploymentGraphCollectionHandler(
RelatedDeploymentGraphCollectionHandler):
"""Cluster Handler for deployment graphs configuration."""
related = objects.Cluster
class ClusterExtensionsHandler(BaseHandler):
"""Cluster extensions handler"""
validator = ExtensionValidator
def _get_cluster_obj(self, cluster_id):
return self.get_object_or_404(
objects.Cluster, cluster_id,
log_404=(
"error",
"There is no cluster with id '{0}' in DB.".format(cluster_id)
)
)
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized list of enabled Cluster extensions
:http: * 200 (OK)
* 404 (cluster not found in db)
"""
cluster = self._get_cluster_obj(cluster_id)
return cluster.extensions
@handle_errors
@validate
@serialize
def PUT(self, cluster_id):
""":returns: JSONized list of enabled Cluster extensions
:http: * 200 (OK)
* 400 (there is no such extension available)
* 404 (cluster not found in db)
"""
cluster = self._get_cluster_obj(cluster_id)
data = self.checked_data()
update_extensions_for_object(cluster, data)
return cluster.extensions
@handle_errors
@validate
def DELETE(self, cluster_id):
"""Disables the extensions for specified cluster
Takes (JSONed) list of extension names to disable.
:http: * 204 (OK)
* 400 (there is no such extension enabled)
* 404 (cluster not found in db)
"""
cluster = self._get_cluster_obj(cluster_id)
# TODO(agordeev): web.py does not support parsing of array arguments
# in the queryset so we specify the input as comma-separated list
extension_names = self.get_param_as_set('extension_names', default=[])
data = self.checked_data(self.validator.validate_delete,
data=extension_names, cluster=cluster)
remove_extensions_from_object(cluster, data)
raise self.http(204)
class ClusterDeploymentGraphHandler(RelatedDeploymentGraphHandler):
"""Cluster Handler for deployment graph configuration."""
related = objects.Cluster

View File

@ -1,118 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.handlers import base
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators import cluster_plugin_link
from nailgun import errors
from nailgun import objects
class ClusterPluginLinkHandler(base.SingleHandler):
validator = cluster_plugin_link.ClusterPluginLinkValidator
single = objects.ClusterPluginLink
@handle_errors
@validate
@serialize
def GET(self, cluster_id, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 404 (dashboard entry not found in db)
"""
self.get_object_or_404(objects.Cluster, cluster_id)
obj = self.get_object_or_404(self.single, obj_id)
return self.single.to_dict(obj)
@handle_errors
@validate
@serialize
def PUT(self, cluster_id, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 400 (invalid object data specified)
* 404 (object not found in db)
"""
obj = self.get_object_or_404(self.single, obj_id)
data = self.checked_data(
self.validator.validate_update,
instance=obj
)
self.single.update(obj, data)
return self.single.to_dict(obj)
def PATCH(self, cluster_id, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 400 (invalid object data specified)
* 404 (object not found in db)
"""
return self.PUT(cluster_id, obj_id)
@handle_errors
@validate
def DELETE(self, cluster_id, obj_id):
""":returns: JSONized REST object.
:http: * 204 (OK)
* 404 (object not found in db)
"""
d_e = self.get_object_or_404(self.single, obj_id)
self.single.delete(d_e)
raise self.http(204)
class ClusterPluginLinkCollectionHandler(base.CollectionHandler):
collection = objects.ClusterPluginLinkCollection
validator = cluster_plugin_link.ClusterPluginLinkValidator
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: Collection of JSONized ClusterPluginLink objects.
:http: * 200 (OK)
* 404 (cluster not found in db)
"""
self.get_object_or_404(objects.Cluster, cluster_id)
return self.collection.to_list(
self.collection.get_by_cluster_id(cluster_id)
)
@handle_errors
@validate
def POST(self, cluster_id):
""":returns: JSONized REST object.
:http: * 201 (object successfully created)
* 400 (invalid object data specified)
"""
data = self.checked_data(cluster_id=cluster_id)
try:
new_obj = self.collection.create_with_cluster_id(data, cluster_id)
except errors.CannotCreate as exc:
raise self.http(400, exc.message)
raise self.http(201, self.collection.single.to_json(new_obj))

View File

@ -1,47 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.objects import Release
from nailgun.objects.serializers.release import ComponentSerializer
class ComponentCollectionHandler(CollectionHandler):
"""Component collection handler"""
@handle_errors
@validate
@serialize
def GET(self, release_id):
""":returns: JSONized component data for release and related plugins.
:http: * 200 (OK)
* 404 (release not found in db)
"""
release = self.get_object_or_404(Release, release_id)
components = Release.get_all_components(release)
return [ComponentSerializer.serialize(c) for c in components]
def POST(self, release_id):
"""Creating of components is disallowed
:http: * 405 (method not supported)
"""
raise self.http(405, 'Create not supported for this entity')

View File

@ -1,320 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.handlers.base import TransactionExecutorHandler
import web
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators import deployment_graph as validators
from nailgun import objects
from nailgun.objects.serializers.deployment_graph import \
DeploymentGraphSerializer
from nailgun import utils
class RelatedDeploymentGraphHandler(SingleHandler):
"""Handler for deployment graph related to model."""
validator = validators.DeploymentGraphValidator
serializer = DeploymentGraphSerializer
single = objects.DeploymentGraph
related = None # related should be substituted during handler inheritance
@handle_errors
@validate
@serialize
def GET(self, obj_id, graph_type=None):
"""Get deployment graph.
:param obj_id: related model ID
:type obj_id: int|basestring
:param graph_type: deployment graph type, default is 'default'
:type graph_type: basestring
:returns: Deployment graph
:rtype: dict
:http: * 200 OK
* 400 (no graph of such type)
               * 404 (related object not found)
"""
obj = self.get_object_or_404(self.related, int(obj_id))
deployment_graph = self.single.get_for_model(obj, graph_type)
if deployment_graph:
return self.single.to_dict(deployment_graph)
else:
raise self.http(404, "Graph with type: {0} is not defined".format(
graph_type))
@handle_errors
@validate
@serialize
def POST(self, obj_id, graph_type=None):
"""Create deployment graph.
:param obj_id: related model ID
:type obj_id: int|basestring
:param graph_type: deployment graph type, default is 'default'
:type graph_type: basestring
:returns: Deployment graph data
:rtype: dict
:http: * 200 (OK)
* 400 (invalid data specified)
* 409 (object already exists)
"""
obj = self.get_object_or_404(self.related, int(obj_id))
data = self.checked_data()
deployment_graph = self.single.get_for_model(obj, graph_type)
if deployment_graph:
            raise self.http(409, 'Deployment graph with type "{0}" already '
                                 'exists.'.format(graph_type))
else:
deployment_graph = self.single.create_for_model(
data, obj, graph_type=graph_type)
return self.single.to_dict(deployment_graph)
@handle_errors
@validate
@serialize
def PUT(self, obj_id, graph_type=None):
"""Update deployment graph.
:param obj_id: related model ID
:type obj_id: int|basestring
:param graph_type: deployment graph type, default is 'default'
:type graph_type: basestring
:returns: Deployment graph data
:rtype: dict
:http: * 200 (OK)
* 400 (invalid data specified)
* 404 (object not found in db)
"""
obj = self.get_object_or_404(self.related, int(obj_id))
data = self.checked_data()
deployment_graph = self.single.get_for_model(obj, graph_type)
if deployment_graph:
self.single.update(deployment_graph, data)
return self.single.to_dict(deployment_graph)
else:
raise self.http(404, "Graph with type: {0} is not defined".format(
graph_type))
@handle_errors
@validate
@serialize
def PATCH(self, obj_id, graph_type=None):
"""Update deployment graph.
:param obj_id: related model ID
:type obj_id: int|basestring
:param graph_type: deployment graph type, default is 'default'
:type graph_type: basestring
:returns: Deployment graph data
:rtype: dict
:http: * 200 (OK)
* 400 (invalid data specified)
* 404 (object not found in db)
"""
return self.PUT(obj_id, graph_type)
@handle_errors
@validate
def DELETE(self, obj_id, graph_type=None):
"""Delete deployment graph.
:param obj_id: related model ID
:type obj_id: int|basestring
:param graph_type: deployment graph type, default is 'default'
:type graph_type: basestring
:http: * 204 (OK)
* 404 (object not found in db)
"""
obj = self.get_object_or_404(self.related, int(obj_id))
deployment_graph = self.single.get_for_model(obj, graph_type)
if deployment_graph:
self.single.delete(deployment_graph)
raise self.http(204)
else:
raise self.http(404, "Graph with type: {0} is not defined".format(
graph_type))
class RelatedDeploymentGraphCollectionHandler(CollectionHandler):
"""Handler for deployment graphs related to the models collection."""
validator = validators.DeploymentGraphValidator
collection = objects.DeploymentGraphCollection
related = None # related should be substituted during handler inheritance
@handle_errors
@validate
@serialize
def GET(self, obj_id):
"""Get deployment graphs list for given object.
:returns: JSONized object.
:http: * 200 (OK)
* 400 (invalid object data specified)
* 404 (object not found in db)
"""
related_model = self.get_object_or_404(self.related, int(obj_id))
graphs = self.collection.get_for_model(related_model)
return self.collection.to_list(graphs)
class DeploymentGraphHandler(SingleHandler):
"""Handler for fetching and deletion of the deployment graph."""
validator = validators.DeploymentGraphValidator
single = objects.DeploymentGraph
@handle_errors
@validate
def DELETE(self, obj_id):
"""Delete deployment graph.
:http: * 204 (OK)
* 404 (object not found in db)
"""
d_e = self.get_object_or_404(self.single, obj_id)
self.single.delete(d_e)
raise self.http(204)
def PATCH(self, obj_id):
return self.PUT(obj_id)
class DeploymentGraphCollectionHandler(CollectionHandler):
"""Handler for deployment graphs collection."""
collection = objects.DeploymentGraphCollection
@handle_errors
@validate
@serialize
def GET(self):
"""Get deployment graphs list with filtering.
:returns: JSONized object.
:http: * 200 (OK)
* 400 (invalid object data specified)
* 404 (object not found in db)
:http GET params:
* clusters_ids = comma separated list of clusters IDs
* plugins_ids = comma separated list of plugins IDs
* releases_ids = comma separated list of releases IDs
* graph_types = comma separated list of deployment graph types
            * fetch_related = bool value (default false). When a clusters
              list is specified, this flag allows fetching not only the
              clusters' own graphs but all graphs for the given clusters'
              releases and enabled plugins, to view the full picture.
"""
# process args
clusters_ids = self.get_param_as_set('clusters_ids')
if clusters_ids:
clusters_ids = self.checked_data(
validate_method=self.validator.validate_ids_list,
data=clusters_ids
)
plugins_ids = self.get_param_as_set('plugins_ids')
if plugins_ids:
plugins_ids = self.checked_data(
validate_method=self.validator.validate_ids_list,
                data=plugins_ids
)
releases_ids = self.get_param_as_set('releases_ids')
if releases_ids:
releases_ids = self.checked_data(
validate_method=self.validator.validate_ids_list,
                data=releases_ids
)
graph_types = self.get_param_as_set('graph_types')
fetch_related = utils.parse_bool(
web.input(fetch_related='0').fetch_related
)
# apply filtering
if clusters_ids or plugins_ids or releases_ids:
entities = [] # all objects for which related graphs is fetched
if clusters_ids:
entities.extend(
objects.ClusterCollection.filter_by_id_list(
None, clusters_ids
).all()
)
if plugins_ids:
entities.extend(
objects.PluginCollection.filter_by_id_list(
None, plugins_ids
).all()
)
if releases_ids:
entities.extend(
objects.ReleaseCollection.filter_by_id_list(
None, releases_ids
).all()
)
result = self.collection.get_related_graphs(
entities, graph_types, fetch_related
)
else:
if graph_types: # and no other filters
result = self.collection.filter_by_graph_types(graph_types)
else:
result = self.collection.all()
return self.collection.to_list(result)
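# A hypothetical client-side sketch (the endpoint path is an assumption;
# routing is defined elsewhere): the filters above map to plain query
# string parameters, e.g.
#
#     GET /api/graphs/?clusters_ids=1,2&graph_types=default&fetch_related=1
#
# returns the graphs owned by clusters 1 and 2 plus, with fetch_related
# enabled, the graphs of their releases and enabled plugins.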
class GraphsExecutorHandler(TransactionExecutorHandler):
"""Handler to execute sequence of deployment graphs."""
validator = validators.GraphExecuteParamsValidator
@handle_errors
def POST(self):
"""Execute graph(s) as single transaction.
:returns: JSONized Task object
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster or sequence not found in db)
* 409 (graph execution is in progress)
"""
data = self.checked_data()
cluster = self.get_object_or_404(objects.Cluster, data.pop('cluster'))
return self.start_transaction(cluster, data)

View File

@ -1,93 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import csv
from StringIO import StringIO
import web
from nailgun.api.v1.handlers import base
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.deployment_history import \
DeploymentHistoryValidator
from nailgun import errors
from nailgun import objects
from nailgun import utils
class DeploymentHistoryCollectionHandler(base.CollectionHandler):
collection = objects.DeploymentHistoryCollection
validator = DeploymentHistoryValidator
@handle_errors
@validate
def GET(self, transaction_id):
""":returns: Collection of DeploymentHistory records.
:http: * 200 (OK)
* 400 (Bad tasks in given transaction)
* 404 (transaction not found in db, task not found in snapshot)
"""
# get transaction data
transaction = self.get_object_or_404(
objects.Transaction, transaction_id)
# process input parameters
nodes_ids = self.get_param_as_set('nodes')
statuses = self.get_param_as_set('statuses')
tasks_names = self.get_param_as_set('tasks_names')
include_summary = utils.parse_bool(
web.input(include_summary="0").include_summary)
try:
self.validator.validate_query(nodes_ids=nodes_ids,
statuses=statuses,
tasks_names=tasks_names)
except errors.ValidationException as exc:
raise self.http(400, exc.message)
# fetch and serialize history
data = self.collection.get_history(transaction=transaction,
nodes_ids=nodes_ids,
statuses=statuses,
tasks_names=tasks_names,
include_summary=include_summary)
if self.get_requested_mime() == 'text/csv':
return self.get_csv(data)
else:
return self.get_default(data)
@serialize
def get_default(self, data):
return data
def get_csv(self, data):
keys = ['task_name',
'node_id',
'status',
'type',
'time_start',
'time_end']
res = StringIO()
csv_writer = csv.writer(res)
csv_writer.writerow(keys)
for obj in data:
csv_writer.writerow([obj.get(k) for k in keys])
return res.getvalue()

View File

@ -1,125 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import web
from nailgun.api.v1.handlers.base import TransactionExecutorHandler
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.validators import deployment_sequence as validators
from nailgun import objects
class SequenceHandler(SingleHandler):
"""Handler for deployment graph related to model."""
validator = validators.SequenceValidator
single = objects.DeploymentSequence
@handle_errors
@serialize
def PUT(self, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 404 (object not found in db)
"""
obj = self.get_object_or_404(self.single, obj_id)
data = self.checked_data(
self.validator.validate_update,
instance=obj
)
self.single.update(obj, data)
return self.single.to_dict(obj)
def PATCH(self, obj_id):
"""Update deployment sequence.
:param obj_id: the deployment sequence id
:returns: updated object
:http: * 200 (OK)
* 400 (invalid data specified)
* 404 (object not found in db)
"""
return self.PUT(obj_id)
class SequenceCollectionHandler(CollectionHandler):
"""Handler for deployment graphs related to the models collection."""
validator = validators.SequenceValidator
collection = objects.DeploymentSequenceCollection
@handle_errors
@serialize
def GET(self):
""":returns: Collection of JSONized Sequence objects by release.
:http: * 200 (OK)
               * 404 (Release or Cluster not found)
"""
release = self._get_release()
if release:
return self.collection.get_for_release(release)
return self.collection.all()
def _get_release(self):
params = web.input(release=None, cluster=None)
if params.cluster:
return self.get_object_or_404(
objects.Cluster, id=params.cluster
).release
if params.release:
return self.get_object_or_404(
objects.Release, id=params.release
)
class SequenceExecutorHandler(TransactionExecutorHandler):
"""Handler to execute deployment sequence."""
validator = validators.SequenceExecutorValidator
@handle_errors
def POST(self, obj_id):
"""Execute sequence as single transaction.
:returns: JSONized Task object
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster or sequence not found in db)
* 409 (graph execution is in progress)
"""
data = self.checked_data()
seq = self.get_object_or_404(objects.DeploymentSequence, id=obj_id)
cluster = self.get_object_or_404(objects.Cluster, data.pop('cluster'))
if cluster.release_id != seq.release_id:
raise self.http(
404,
"Sequence '{0}' is not found for cluster {1}"
.format(seq.name, cluster.name)
)
data['graphs'] = seq.graphs
return self.start_transaction(cluster, data)
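# An illustrative request (the URL and extra option keys are assumptions;
# only 'cluster' and the injected 'graphs' key are taken from the code
# above):
#
#     POST /api/sequences/<id>/execute
#     {"cluster": 1, "dry_run": false, "force": false}
#
# The sequence's graphs are added to the payload as 'graphs' and the whole
# set is executed as one transaction via start_transaction().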

View File

@ -1,30 +0,0 @@
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import serialize
from nailgun.extensions import get_all_extensions
class ExtensionHandler(BaseHandler):
"""Extension Handler"""
@serialize
def GET(self):
""":returns: JSONized list of available extensions.
:http: * 200 (OK)
"""
return [ext().to_dict() for ext in get_all_extensions()]

View File

@ -1,484 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers dealing with logs
"""
from itertools import dropwhile
import logging
import os
import re
import time
from oslo_serialization import jsonutils
import web
from nailgun import consts
from nailgun import objects
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.settings import settings
from nailgun.task.manager import DumpTaskManager
from nailgun.task.task import DumpTask
logger = logging.getLogger(__name__)
def read_backwards(file, from_byte=None, bufsize=0x20000):
cache_pos = file.tell()
file.seek(0, os.SEEK_END)
size = file.tell()
file.seek(cache_pos, os.SEEK_SET)
if size == 0:
return
if from_byte is None:
from_byte = size
lines = ['']
read_size = bufsize
rem = from_byte % bufsize
if rem == 0:
# Perform bufsize reads only
pos = max(0, (from_byte // bufsize - 1) * bufsize)
else:
# One more iteration will be done to read rem bytes so that we
# are aligned to exactly bufsize reads later on
read_size = rem
pos = (from_byte // bufsize) * bufsize
while pos >= 0:
file.seek(pos, os.SEEK_SET)
data = file.read(read_size) + lines[0]
lines = re.findall('[^\n]*\n?', data)
ix = len(lines) - 2
while ix > 0:
yield lines[ix]
ix -= 1
pos -= bufsize
read_size = bufsize
else:
yield lines[0]
# Set cursor position to last read byte
try:
file.seek(max(0, pos), os.SEEK_SET)
except IOError:
pass
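# A minimal usage sketch (not part of the original module): read_backwards
# yields lines from the end of a file towards its beginning, so the last
# ten lines of a log can be collected as
#
#     with open('some.log', 'r') as f:
#         tail = []
#         for line in read_backwards(f):
#             tail.append(line)
#             if len(tail) >= 10:
#                 break
#         tail.reverse()  # restore chronological order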
# It turns out that strftime/strptime are costly functions in Python
# http://stackoverflow.com/questions/13468126/a-faster-strptime
# We don't call them if the log and UI date formats aren't very different
STRPTIME_PERFORMANCE_HACK = {}
if settings.UI_LOG_DATE_FORMAT == '%Y-%m-%d %H:%M:%S':
STRPTIME_PERFORMANCE_HACK = {
'%Y-%m-%dT%H:%M:%S': lambda date: date.replace('T', ' '),
'%Y-%m-%d %H:%M:%S': lambda date: date,
}
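# E.g. (illustrative dates): with the UI format above, an entry date such as
# '2016-01-21T14:05:07' is converted to '2016-01-21 14:05:07' by a single
# str.replace('T', ' ') instead of a strptime/strftime round trip.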
def read_log(
log_file=None,
level=None,
        log_config=None,
max_entries=None,
regexp=None,
from_byte=-1,
fetch_older=False,
to_byte=0,
**kwargs):
    log_config = log_config or {}
    has_more = False
entries = []
log_date_format = log_config['date_format']
multiline = log_config.get('multiline', False)
skip_regexp = None
if 'skip_regexp' in log_config:
skip_regexp = re.compile(log_config['skip_regexp'])
allowed_levels = log_config['levels']
if level:
allowed_levels = list(dropwhile(lambda l: l != level,
log_config['levels']))
log_file_size = os.stat(log_file).st_size
if log_date_format in STRPTIME_PERFORMANCE_HACK:
strptime_function = STRPTIME_PERFORMANCE_HACK[log_date_format]
else:
strptime_function = lambda date: time.strftime(
settings.UI_LOG_DATE_FORMAT,
time.strptime(date, log_date_format)
)
with open(log_file, 'r') as f:
# we need to calculate current position manually instead of using
# tell() because read_backwards uses buffering
f.seek(0, os.SEEK_END)
pos = f.tell()
if from_byte != -1 and fetch_older:
pos = from_byte
multilinebuf = []
for line in read_backwards(f, from_byte=pos):
pos -= len(line)
if not fetch_older and pos < to_byte:
has_more = pos > 0
break
entry = line.rstrip('\n')
if not len(entry):
continue
if skip_regexp and skip_regexp.match(entry):
continue
m = regexp.match(entry)
if m is None:
if multiline:
                    # Add the next multiline part to the last entry if it
                    # exists.
multilinebuf.append(entry)
else:
logger.debug("Unable to parse log entry '%s' from %s",
entry, log_file)
continue
entry_text = m.group('text')
if len(multilinebuf):
multilinebuf.reverse()
entry_text += '\n' + '\n'.join(multilinebuf)
multilinebuf = []
entry_level = m.group('level').upper() or 'INFO'
if level and not (entry_level in allowed_levels):
continue
try:
entry_date = strptime_function(m.group('date'))
except ValueError:
logger.debug("Unable to parse date from log entry."
" Date format: %r, date part of entry: %r",
log_date_format,
m.group('date'))
continue
entries.append([
entry_date,
entry_level,
entry_text
])
if len(entries) >= max_entries:
has_more = True
break
if fetch_older or (not fetch_older and from_byte == -1):
from_byte = pos
if from_byte == 0:
has_more = False
return {
'entries': entries,
'from': from_byte,
'to': log_file_size,
'has_more': has_more,
}
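# An illustrative call (the log_config and regexp here are assumptions, not
# the real settings.LOGS entries):
#
#     result = read_log(
#         log_file='some.log',
#         log_config={
#             'date_format': '%Y-%m-%d %H:%M:%S',
#             'levels': ['ERROR', 'WARNING', 'INFO', 'DEBUG'],
#         },
#         max_entries=50,
#         regexp=re.compile(
#             r'^(?P<date>[^ ]+ [^ ]+) (?P<level>\w+) (?P<text>.*)$'),
#     )
#
# result['entries'] holds [date, level, text] triples, newest first;
# result['has_more'] tells whether older data remains in the file.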
class LogEntryCollectionHandler(BaseHandler):
"""Log entry collection handler"""
@handle_errors
@validate
@serialize
def GET(self):
"""Receives following parameters:
- *date_before* - get logs before this date
- *date_after* - get logs after this date
- *source* - source of logs
- *node* - node id (for getting node logs)
- *level* - log level (all levels showed by default)
        - *to* - read entries up to this byte offset
- *max_entries* - max number of entries to load
:returns: Collection of log entries, log file size
        and whether there are more entries.
:http:
* 200 (OK)
* 400 (invalid *date_before* value)
* 400 (invalid *date_after* value)
* 400 (invalid *source* value)
* 400 (invalid *node* value)
* 400 (invalid *level* value)
* 400 (invalid *to* value)
* 400 (invalid *max_entries* value)
* 404 (log file not found)
* 404 (log files dir not found)
* 404 (node not found)
* 500 (node has no assigned ip)
* 500 (invalid regular expression in config)
"""
data = self.read_and_validate_data()
log_file = data['log_file']
fetch_older = data['fetch_older']
from_byte = data['from_byte']
to_byte = data['to_byte']
log_file_size = os.stat(log_file).st_size
if (not fetch_older and to_byte >= log_file_size) or \
(fetch_older and from_byte == 0):
return jsonutils.dumps({
'entries': [],
'from': from_byte,
'to': log_file_size,
'has_more': False,
})
return read_log(**data)
def read_and_validate_data(self):
user_data = web.input()
if not user_data.get('source'):
logger.debug("'source' must be specified")
raise self.http(400, "'source' must be specified")
try:
max_entries = int(user_data.get('max_entries',
settings.TRUNCATE_LOG_ENTRIES))
except ValueError:
logger.debug("Invalid 'max_entries' value: %r",
user_data.get('max_entries'))
raise self.http(400, "Invalid 'max_entries' value")
from_byte = None
try:
from_byte = int(user_data.get('from', -1))
except ValueError:
logger.debug("Invalid 'from' value: %r", user_data.get('from'))
raise self.http(400, "Invalid 'from' value")
to_byte = None
try:
to_byte = int(user_data.get('to', 0))
except ValueError:
logger.debug("Invalid 'to' value: %r", user_data.get('to'))
raise self.http(400, "Invalid 'to' value")
fetch_older = 'fetch_older' in user_data and \
user_data['fetch_older'].lower() in ('1', 'true')
date_before = user_data.get('date_before')
if date_before:
try:
date_before = time.strptime(date_before,
settings.UI_LOG_DATE_FORMAT)
except ValueError:
logger.debug("Invalid 'date_before' value: %r", date_before)
raise self.http(400, "Invalid 'date_before' value")
date_after = user_data.get('date_after')
if date_after:
try:
date_after = time.strptime(date_after,
settings.UI_LOG_DATE_FORMAT)
except ValueError:
logger.debug("Invalid 'date_after' value: %r", date_after)
raise self.http(400, "Invalid 'date_after' value")
log_config = filter(lambda lc: lc['id'] == user_data.get('source'),
settings.LOGS)
        # If the log source is not found, or it is a fake source but we are
        # running without fake tasks.
if not log_config or (log_config[0].get('fake') and
not settings.FAKE_TASKS):
logger.debug("Log source %r not found", user_data.get('source'))
raise self.http(404, "Log source not found")
log_config = log_config[0]
        # If it is a 'remote' and not a 'fake' log source, then calculate
        # the log file path from the base dir, node IP and relative path.
        # Otherwise use the absolute path as-is.
node = None
if log_config['remote'] and not log_config.get('fake'):
if not user_data.get('node'):
raise self.http(400, "'node' must be specified")
try:
node_id = int(user_data.get('node'))
except ValueError:
logger.debug("Invalid 'node' value: %r", user_data.get('node'))
raise self.http(400, "Invalid 'node' value")
node = objects.Node.get_by_uid(node_id)
if not node:
raise self.http(404, "Node not found")
if not node.ip:
logger.error('Node %r has no assigned ip', node.id)
raise self.http(500, "Node has no assigned ip")
if node.status == consts.NODE_STATUSES.discover:
ndir = node.ip
else:
ndir = objects.Node.get_node_fqdn(node)
remote_log_dir = os.path.join(log_config['base'], ndir)
if not os.path.exists(remote_log_dir):
logger.debug("Log files dir %r for node %s not found",
remote_log_dir, node.id)
raise self.http(404, "Log files dir for node not found")
log_file = os.path.join(remote_log_dir, log_config['path'])
else:
log_file = log_config['path']
if not os.path.exists(log_file):
if node:
logger.debug("Log file %r for node %s not found",
log_file, node.id)
else:
logger.debug("Log file %r not found", log_file)
raise self.http(404, "Log file not found")
level = user_data.get('level')
if level is not None and level not in log_config['levels']:
raise self.http(400, "Invalid level")
try:
regexp = re.compile(log_config['regexp'])
except re.error:
logger.exception('Invalid regular expression for file %r',
log_config['id'])
raise self.http(500, "Invalid regular expression in config")
if 'skip_regexp' in log_config:
try:
re.compile(log_config['skip_regexp'])
except re.error:
logger.exception('Invalid regular expression for file %r',
log_config['id'])
raise self.http(500, "Invalid regular expression in config")
return {
'date_after': date_after,
'date_before': date_before,
'level': level,
'log_file': log_file,
'log_config': log_config,
'max_entries': max_entries,
'node': node,
'regexp': regexp,
'fetch_older': fetch_older,
'from_byte': from_byte,
'to_byte': to_byte,
}
class LogPackageHandler(BaseHandler):
"""Log package handler"""
@handle_errors
@validate
def PUT(self):
""":returns: JSONized Task object.
:http: * 200 (task successfully executed)
* 400 (data validation failed)
* 404 (cluster not found in db)
"""
try:
conf = jsonutils.loads(web.data()) if web.data() else None
task_manager = DumpTaskManager()
task = task_manager.execute(
conf=conf,
auth_token=web.ctx.env.get('HTTP_X_AUTH_TOKEN'))
except Exception as exc:
            logger.warn(u'DumpTask: error during execution of '
                        u'dump environment task: {0}'.format(str(exc)))
raise self.http(400, str(exc))
self.raise_task(task)
class LogPackageDefaultConfig(BaseHandler):
@handle_errors
@validate
@serialize
def GET(self):
"""Generates default config for snapshot
:http: * 200
"""
return DumpTask.conf()
class LogSourceCollectionHandler(BaseHandler):
"""Log source collection handler"""
@serialize
def GET(self):
""":returns: Collection of log sources (from settings)
:http: * 200 (OK)
"""
return settings.LOGS
class SnapshotDownloadHandler(BaseHandler):
def GET(self, snapshot_name):
""":returns: empty response
:resheader X-Accel-Redirect: snapshot_name
:http: * 200 (OK)
* 401 (Unauthorized)
* 404 (Snapshot with given name does not exist)
"""
web.header('X-Accel-Redirect', '/dump/' + snapshot_name)
return ''
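# Note (illustrative): X-Accel-Redirect is an nginx mechanism; the front-end
# web server intercepts the header and serves the snapshot from its internal
# /dump/ location, so the handler itself returns an empty body.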
class LogSourceByNodeCollectionHandler(BaseHandler):
"""Log source by node collection handler"""
@handle_errors
@validate
@serialize
def GET(self, node_id):
""":returns: Collection of log sources by node (from settings)
:http: * 200 (OK)
* 404 (node not found in db)
"""
node = self.get_object_or_404(objects.Node, node_id)
def getpath(x):
if x.get('fake'):
if settings.FAKE_TASKS:
return x['path']
else:
return ''
else:
if node.status == consts.NODE_STATUSES.discover:
ndir = node.ip
else:
ndir = objects.Node.get_node_fqdn(node)
return os.path.join(x['base'], ndir, x['path'])
        def is_available(x):
            return (
                x.get('remote') and x.get('path') and x.get('base') and
                os.access(getpath(x), os.R_OK) and
                os.path.isfile(getpath(x))
            )
        sources = filter(is_available, settings.LOGS)
return sources

View File

@ -1,85 +0,0 @@
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_serialization import jsonutils
from nailgun.api.v1.handlers.base import DBSingletonHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.master_node_settings \
import MasterNodeSettingsValidator
from nailgun.logger import logger
from nailgun import objects
from nailgun.task.manager import CreateStatsUserTaskManager
from nailgun.task.manager import RemoveStatsUserTaskManager
class MasterNodeSettingsHandler(DBSingletonHandler):
single = objects.MasterNodeSettings
validator = MasterNodeSettingsValidator
not_found_error = "Settings are not found in DB"
def _handle_stats_opt_in(self, settings_data=None):
"""Starts task on stats user creation or removal
:param settings_data: dict with master node settings.
Current data from DB will be used if master_node_settings_data is None
"""
must_send = self.single.must_send_stats(
master_node_settings_data=settings_data)
if must_send:
logger.debug("Handling customer opt-in to sending statistics")
manager = CreateStatsUserTaskManager()
else:
logger.debug("Handling customer opt-out to sending statistics")
manager = RemoveStatsUserTaskManager()
try:
manager.execute()
except Exception:
logger.exception("Stats user operation failed")
def _get_new_opt_in_status(self):
"""Extracts opt in status from request
Returns None if no opt in status in the request
:return: bool or None
"""
data = self.checked_data(self.validator.validate_update)
return data.get('settings', {}).get('statistics', {}).\
get('send_anonymous_statistic', {}).get('value')
def _perform_update(self, http_method):
old_opt_in = self.single.must_send_stats()
new_opt_in = self._get_new_opt_in_status()
result = http_method()
if new_opt_in is not None and old_opt_in != new_opt_in:
self._handle_stats_opt_in(settings_data=jsonutils.loads(result))
return result
@handle_errors
@validate
def PUT(self):
return self._perform_update(
super(MasterNodeSettingsHandler, self).PUT)
@handle_errors
@validate
def PATCH(self):
return self._perform_update(
super(MasterNodeSettingsHandler, self).PATCH)

View File

@ -1,315 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers dealing with nodes
"""
from datetime import datetime
import web
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators import node as node_validators
from nailgun import errors
from nailgun import objects
from nailgun.db import db
from nailgun.db.sqlalchemy.models import Node
from nailgun.task.manager import NodeDeletionTaskManager
from nailgun.logger import logger
from nailgun import notifier
class NodeHandler(SingleHandler):
single = objects.Node
validator = node_validators.NodeValidator
@handle_errors
@validate
@serialize
def PUT(self, obj_id):
""":returns: JSONized Node object.
:http: * 200 (OK)
               * 400 (error occurred while processing data)
* 404 (Node not found in db)
"""
obj = self.get_object_or_404(self.single, obj_id)
data = self.checked_data(
self.validator.validate_update,
instance=obj
)
        # NOTE(aroma): if a node is being assigned to the cluster and a
        # network template has been set for the cluster, the network
        # template will also be applied to the node; in that case relevant
        # errors might occur, so they must be handled in order to form a
        # proper HTTP response for the user
try:
self.single.update(obj, data)
except errors.NetworkTemplateCannotBeApplied as exc:
raise self.http(400, exc.message)
return self.single.to_dict(obj)
@handle_errors
@validate
def DELETE(self, obj_id):
"""Deletes a node from DB and from Cobbler.
:return: JSON-ed deletion task
        :http: * 200 (node has been successfully deleted)
* 202 (node is successfully scheduled for deletion)
* 400 (data validation failed)
* 404 (node not found in db)
* 403 (one of the controllers is in error state or task can't
be started due to already running tasks)
"""
node = self.get_object_or_404(self.single, obj_id)
task_manager = NodeDeletionTaskManager(cluster_id=node.cluster_id)
try:
task = task_manager.execute([node], mclient_remove=False)
except (errors.TaskAlreadyRunning,
errors.ControllerInErrorState) as e:
raise self.http(403, e.message)
self.raise_task(task)
class NodeCollectionHandler(CollectionHandler):
"""Node collection handler"""
validator = node_validators.NodeValidator
collection = objects.NodeCollection
@handle_errors
@validate
@serialize
def GET(self):
"""May receive cluster_id parameter to filter list of nodes
:returns: Collection of JSONized Node objects.
:http: * 200 (OK)
"""
cluster_id = web.input(cluster_id=None).cluster_id
nodes = self.collection.eager_nodes_handlers(None)
if cluster_id == '':
nodes = nodes.filter_by(cluster_id=None)
elif cluster_id:
nodes = nodes.filter_by(cluster_id=cluster_id)
return self.collection.to_list(nodes)
@handle_errors
@validate
@serialize
def PUT(self):
""":returns: Collection of JSONized Node objects.
:http: * 200 (nodes are successfully updated)
* 400 (data validation failed)
"""
data = self.checked_data(
self.validator.validate_collection_update
)
nodes_updated = []
for nd in data:
node = self.collection.single.get_by_meta(nd)
if not node:
raise self.http(404, "Can't find node: {0}".format(nd))
try:
self.collection.single.update(node, nd)
except errors.NetworkTemplateCannotBeApplied as exc:
raise self.http(400, exc.message)
nodes_updated.append(node.id)
        # we need to eager-load everything that is used in render
nodes = self.collection.filter_by_id_list(
self.collection.eager_nodes_handlers(None),
nodes_updated
)
return self.collection.to_list(nodes)
@handle_errors
@validate
def DELETE(self):
"""Deletes a batch of nodes.
Takes (JSONed) list of node ids to delete.
:return: JSON-ed deletion task
:http: * 200 (nodes have been succesfully deleted)
* 202 (nodes are successfully scheduled for deletion)
* 400 (data validation failed)
* 404 (nodes not found in db)
* 403 (one of the controllers is in error state or task can't
be started due to already running tasks)
"""
# TODO(pkaminski): web.py does not support parsing of array arguments
# in the query string, so we specify the input as a comma-separated list
node_ids = self.get_param_as_set('ids', default=[])
node_ids = self.checked_data(
validate_method=self.validator.validate_ids_list,
data=node_ids
)
nodes = self.get_objects_list_or_404(self.collection, node_ids)
task_manager = NodeDeletionTaskManager(cluster_id=nodes[0].cluster_id)
# NOTE(aroma): same considerations as in the note in NodeHandler's PUT method
try:
task = task_manager.execute(nodes, mclient_remove=False)
except (errors.TaskAlreadyRunning,
errors.ControllerInErrorState) as e:
raise self.http(403, e.message)
self.raise_task(task)
class NodeAgentHandler(BaseHandler):
collection = objects.NodeCollection
validator = node_validators.NodeValidator
@handle_errors
@validate
@serialize
def PUT(self):
""":returns: node id.
:http: * 200 (node are successfully updated)
* 304 (node data not changed since last request)
* 400 (data validation failed)
* 404 (node not found)
"""
nd = self.checked_data(
self.validator.validate_update,
data=web.data())
node = self.collection.single.get_by_meta(nd)
if not node:
raise self.http(404, "Can't find node: {0}".format(nd))
node.timestamp = datetime.now()
if not node.online:
node.online = True
msg = u"Node '{0}' is back online".format(node.human_readable_name)
logger.info(msg)
notifier.notify("discover", msg, node_id=node.id)
db().flush()
if 'agent_checksum' in nd and (
node.agent_checksum == nd['agent_checksum']
):
return {'id': node.id, 'cached': True}
nd['is_agent'] = True
self.collection.single.update_by_agent(node, nd)
return {"id": node.id}
class NodesAllocationStatsHandler(BaseHandler):
"""Node allocation stats handler"""
@handle_errors
@validate
@serialize
def GET(self):
""":returns: Total and unallocated nodes count.
:http: * 200 (OK)
"""
unallocated_nodes = db().query(Node).filter_by(cluster_id=None).count()
total_nodes = db().query(Node).count()
return {'total': total_nodes,
'unallocated': unallocated_nodes}
class NodeAttributesHandler(BaseHandler):
"""Node attributes handler"""
validator = node_validators.NodeAttributesValidator
@handle_errors
@validate
@serialize
def GET(self, node_id):
""":returns: JSONized Node attributes.
:http: * 200 (OK)
* 404 (node not found in db)
"""
node = self.get_object_or_404(objects.Node, node_id)
return objects.Node.get_attributes(node)
@handle_errors
@validate
@serialize
def PUT(self, node_id):
""":returns: JSONized Node attributes.
:http: * 200 (OK)
* 400 (wrong attributes data specified)
* 404 (node not found in db)
"""
node = self.get_object_or_404(objects.Node, node_id)
if not node.cluster:
raise self.http(400, "Node '{}' doesn't belong to any cluster"
.format(node.id))
data = self.checked_data(node=node, cluster=node.cluster)
objects.Node.update_attributes(node, data)
return objects.Node.get_attributes(node)
class NodeAttributesDefaultsHandler(BaseHandler):
"""Node default attributes handler"""
@handle_errors
@validate
@serialize
def GET(self, node_id):
""":returns: JSONized Node default attributes.
:http: * 200 (OK)
* 404 (node not found in db)
"""
node = self.get_object_or_404(objects.Node, node_id)
return objects.Node.get_default_attributes(node)
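A minimal sketch of how an agent-style update could be sent to NodeAgentHandler above; the base URL, port, route path, and use of the requests library are assumptions for illustration (the real mapping lived in the removed urls.py), not part of the retired source.

import json
import requests

# Assumed nailgun endpoint; the real route was defined in urls.py.
URL = 'http://10.20.0.2:8000/api/v1/nodes/agent/'
payload = {'mac': '52:54:00:aa:bb:cc', 'agent_checksum': 'abc123'}

resp = requests.put(URL, data=json.dumps(payload))
if resp.ok and resp.json().get('cached'):
    # agent_checksum matched the stored one, so node data was not re-read
    print('node unchanged since last agent report')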

View File

@ -1,91 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers dealing with node groups
"""
import web
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.node_group import NodeGroupValidator
from nailgun import errors
from nailgun import objects
class NodeGroupHandler(SingleHandler):
"""NodeGroup single handler"""
single = objects.NodeGroup
validator = NodeGroupValidator
@handle_errors
@validate
def DELETE(self, group_id):
""":returns: {}
:http: * 204 (object successfully deleted)
* 400 (data validation or some of tasks failed)
* 404 (nodegroup not found in db)
* 409 (previous dnsmasq setup is not finished yet)
"""
node_group = self.get_object_or_404(objects.NodeGroup, group_id)
self.checked_data(
self.validator.validate_delete,
instance=node_group
)
try:
self.single.delete(node_group)
except errors.TaskAlreadyRunning as exc:
raise self.http(409, exc.message)
except Exception as exc:
raise self.http(400, exc.message)
raise self.http(204)
class NodeGroupCollectionHandler(CollectionHandler):
"""NodeGroup collection handler"""
collection = objects.NodeGroupCollection
validator = NodeGroupValidator
@handle_errors
@validate
@serialize
def GET(self):
"""May receive cluster_id parameter to filter list of groups
:returns: Collection of JSONized NodeGroup objects.
:http: * 200 (OK)
"""
user_data = web.input(cluster_id=None)
if user_data.cluster_id is not None:
return self.collection.to_list(
self.collection.get_by_cluster_id(
user_data.cluster_id
)
)
else:
return self.collection.to_list()
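For illustration, a hedged sketch of the cluster_id filter handled above; the endpoint path and the requests library are assumptions, not taken from the retired code.

import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed master-node endpoint
resp = requests.get(BASE + '/nodegroups/', params={'cluster_id': 1})
for group in resp.json():
    print(group['id'], group['name'])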

View File

@ -1,109 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers dealing with notifications
"""
import web
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.notification import NotificationValidator
from nailgun import objects
class NotificationHandler(SingleHandler):
"""Notification single handler"""
single = objects.Notification
validator = NotificationValidator
class NotificationCollectionHandler(CollectionHandler):
collection = objects.NotificationCollection
validator = NotificationValidator
@handle_errors
@validate
@serialize
def PUT(self):
""":returns: Collection of JSONized Notification objects.
:http: * 200 (OK)
* 400 (invalid data specified for collection update)
"""
data = self.validator.validate_collection_update(web.data())
notifications_updated = []
for nd in data:
notif = self.collection.single.get_by_uid(nd["id"])
self.collection.single.update(notif, nd)
notifications_updated.append(notif)
return self.collection.to_list(notifications_updated)
class NotificationCollectionStatsHandler(CollectionHandler):
collection = objects.NotificationCollection
validator = NotificationValidator
@handle_errors
@validate
@serialize
def GET(self):
"""Calculates notifications statuses
Counts all presented notifications in the DB and returns dict
with structure {'total': count, 'unread': count, ...}
:returns: dict with notifications statuses count
:http: * 200 (OK)
"""
return self.collection.single.get_statuses_with_count()
@handle_errors
@validate
def POST(self):
"""Update notification statuses is not allowed
:http: * 405 (Method not allowed)
"""
raise self.http(405)
class NotificationStatusHandler(BaseHandler):
validator = NotificationValidator
@handle_errors
@validate
@serialize
def PUT(self):
"""Updates status of all notifications
:http: * 200 (OK)
* 400 (Invalid data)
"""
web_data = web.data()
data = self.validator.validate_change_status(web_data)
status = data['status']
objects.NotificationCollection.update_statuses(status)
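A purely illustrative sketch of driving NotificationStatusHandler; the route path is a guess (urls.py is not shown here) and the requests library is an assumption.

import json
import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
# Mark every notification as read; the path is hypothetical.
requests.put(BASE + '/notifications/status/',
             data=json.dumps({'status': 'read'}))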

View File

@ -1,147 +0,0 @@
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import traceback
import six
import web
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.openstack_config import OpenstackConfigValidator
from nailgun import errors
from nailgun.logger import logger
from nailgun import objects
from nailgun.task.manager import OpenstackConfigTaskManager
class OpenstackConfigCollectionHandler(BaseHandler):
validator = OpenstackConfigValidator
@handle_errors
@validate
@serialize
def GET(self):
"""Returns list of filtered config objects.
:http: * 200 (OK)
* 400 (Invalid query specified)
:return: List of config objects in JSON format.
"""
data = self.checked_data(
self.validator.validate_query, data=web.input())
node_ids = data.pop('node_ids', None)
configs = objects.OpenstackConfigCollection.filter_by(None, **data)
if node_ids:
configs = objects.OpenstackConfigCollection.filter_by_list(
configs, 'node_id', node_ids)
return objects.OpenstackConfigCollection.to_list(configs)
@handle_errors
@validate
def POST(self):
"""Creates new config object.
If config object with specified parameters exists, it is replaced
with a new config object. Previous object is marked as inactive.
It can be retrieved to track the history of configuration changes.
:http: * 201 (Object successfully created)
* 400 (Invalid query specified)
* 404 (Object dependencies not found)
:return: New config object in JSON format.
"""
data = self.checked_data()
configs = objects.OpenstackConfigCollection.create(data)
raise self.http(
201, objects.OpenstackConfigCollection.to_json(configs))
class OpenstackConfigHandler(SingleHandler):
single = objects.OpenstackConfig
validator = OpenstackConfigValidator
@handle_errors
@validate
def PUT(self, obj_id):
"""Update an existing configuration is not allowed
:http: * 405 (Method not allowed)
"""
raise self.http(405)
@handle_errors
@validate
def DELETE(self, obj_id):
""":returns: Empty string
:http: * 204 (object successfully deleted)
* 400 (object is already deleted)
* 404 (object not found in db)
"""
obj = self.get_object_or_404(
self.single,
obj_id
)
self.checked_data(
self.validator.validate_delete,
instance=obj
)
try:
self.single.disable(obj)
except errors.CannotUpdate as exc:
raise self.http(400, exc.message)
raise self.http(204)
class OpenstackConfigExecuteHandler(BaseHandler):
validator = OpenstackConfigValidator
task_manager = OpenstackConfigTaskManager
@handle_errors
@validate
def PUT(self):
"""Executes update tasks for specified resources.
:http: * 200 (OK)
* 202 (Accepted)
* 400 (Invalid data)
* 404 (Object dependencies not found)
"""
graph_type = web.input(graph_type=None).graph_type or None
filters = self.checked_data(self.validator.validate_execute)
cluster = self.get_object_or_404(
objects.Cluster, filters['cluster_id'])
# Execute upload task for nodes
task_manager = self.task_manager(cluster_id=cluster.id)
try:
task = task_manager.execute(filters, graph_type=graph_type)
except Exception as exc:
logger.warn(
u'Cannot execute %s task: %s',
self.task_manager.__name__, traceback.format_exc())
raise self.http(400, six.text_type(exc))
self.raise_task(task)
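A hedged end-to-end sketch of the two handlers above: create a config, then trigger its application. The paths, port, and payload shape are assumptions and may not pass the real validator.

import json
import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint and routes
cfg = {'cluster_id': 1, 'configuration': {'nova_config': {}}}
created = requests.post(BASE + '/openstack-config/', data=json.dumps(cfg))
print(created.status_code)  # 201 on success

# Apply active configs of the cluster; 202 means a task was scheduled.
run = requests.put(BASE + '/openstack-config/execute/',
                   data=json.dumps({'cluster_id': 1}))
print(run.status_code)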

View File

@ -1,564 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import web
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import TransactionExecutorHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.cluster import ProvisionSelectedNodesValidator
from nailgun.api.v1.validators.node import DeploySelectedNodesValidator
from nailgun.api.v1.validators.node import NodeDeploymentValidator
from nailgun.api.v1.validators.node import NodesFilterValidator
from nailgun.api.v1.validators.orchestrator_graph import \
GraphSolverVisualizationValidator
from nailgun.logger import logger
from nailgun import consts
from nailgun import errors
from nailgun import objects
from nailgun import utils
from nailgun.orchestrator import deployment_serializers
from nailgun.orchestrator import graph_visualization
from nailgun.orchestrator import orchestrator_graph
from nailgun.orchestrator import provisioning_serializers
from nailgun.orchestrator.stages import post_deployment_serialize
from nailgun.orchestrator.stages import pre_deployment_serialize
from nailgun.orchestrator import task_based_deployment
from nailgun.task.helpers import TaskHelper
from nailgun.task import manager
from nailgun.task import task
class NodesFilterMixin(object):
validator = NodesFilterValidator
def get_default_nodes(self, cluster):
"""Method should be overriden and return list of nodes"""
raise NotImplementedError('Please Implement this method')
def get_nodes(self, cluster):
"""If nodes selected in filter then return them
else return default nodes
"""
nodes = self.get_param_as_set('nodes', default=[])
if not nodes:
return self.get_default_nodes(cluster) or []
node_ids = self.checked_data(data=nodes)
nodes_obj = self.get_objects_list_or_404(
objects.NodeCollection,
node_ids
)
self.checked_data(self.validator.validate_placement,
data=nodes_obj, cluster=cluster)
return nodes_obj
class DefaultOrchestratorInfo(NodesFilterMixin, BaseHandler):
"""Base class for default orchestrator data
Subclasses must override the _serialize method
"""
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized default data which will be passed to orchestrator
:http: * 200 (OK)
* 400 (some nodes belong to a different cluster or are not assigned)
* 404 (cluster not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
nodes = self.get_nodes(cluster)
return self._serialize(cluster, nodes)
def _serialize(self, cluster, nodes):
raise NotImplementedError('Override the method')
def get_default_nodes(self, cluster):
return objects.Cluster.get_nodes_not_for_deletion(cluster)
class OrchestratorInfo(BaseHandler):
"""Base class for replaced data."""
def get_orchestrator_info(self, cluster):
"""Method should return data which will be passed to orchestrator"""
raise NotImplementedError('Please Implement this method')
def update_orchestrator_info(self, cluster, data):
"""Method should override data which will be passed to orchestrator"""
raise NotImplementedError('Please Implement this method')
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: JSONized data which will be passed to orchestrator
:http: * 200 (OK)
* 404 (cluster not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
return self.get_orchestrator_info(cluster)
@handle_errors
@validate
@serialize
def PUT(self, cluster_id):
""":returns: JSONized data which will be passed to orchestrator
:http: * 200 (OK)
* 400 (wrong data specified)
* 404 (cluster not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
data = self.checked_data()
self.update_orchestrator_info(cluster, data)
logger.debug('OrchestratorInfo:'
' facts for cluster_id {0} were uploaded'
.format(cluster_id))
return data
@handle_errors
@validate
def DELETE(self, cluster_id):
""":returns: {}
:http: * 202 (orchestrator data deletion process launched)
* 400 (failed to execute orchestrator data deletion process)
* 404 (cluster not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
self.update_orchestrator_info(cluster, {})
raise self.http(202, '{}')
class DefaultProvisioningInfo(DefaultOrchestratorInfo):
def _serialize(self, cluster, nodes):
return provisioning_serializers.serialize(
cluster, nodes, ignore_customized=True)
class DefaultDeploymentInfo(DefaultOrchestratorInfo):
def _serialize(self, cluster, nodes):
if objects.Release.is_lcm_supported(cluster.release):
serialized = deployment_serializers.serialize_for_lcm(
cluster, nodes, ignore_customized=True
)
else:
graph = orchestrator_graph.AstuteGraph(cluster)
serialized = deployment_serializers.serialize(
graph, cluster, nodes, ignore_customized=True)
return _deployment_info_in_compatible_format(
serialized, utils.parse_bool(web.input(split='0').split)
)
class DefaultPrePluginsHooksInfo(DefaultOrchestratorInfo):
def _serialize(self, cluster, nodes):
if objects.Release.is_lcm_supported(cluster.release):
raise self.http(
405, msg="The plugin hooks are not supported anymore."
)
graph = orchestrator_graph.AstuteGraph(cluster)
return pre_deployment_serialize(graph, cluster, nodes)
class DefaultPostPluginsHooksInfo(DefaultOrchestratorInfo):
def _serialize(self, cluster, nodes):
if objects.Release.is_lcm_supported(cluster.release):
raise self.http(
405, msg="The plugin hooks are not supported anymore."
)
graph = orchestrator_graph.AstuteGraph(cluster)
return post_deployment_serialize(graph, cluster, nodes)
class ProvisioningInfo(OrchestratorInfo):
def get_orchestrator_info(self, cluster):
return objects.Cluster.get_provisioning_info(cluster)
def update_orchestrator_info(self, cluster, data):
return objects.Cluster.replace_provisioning_info(cluster, data)
class DeploymentInfo(OrchestratorInfo):
def get_orchestrator_info(self, cluster):
return _deployment_info_in_compatible_format(
objects.Cluster.get_deployment_info(cluster),
utils.parse_bool(web.input(split='0').split)
)
def update_orchestrator_info(self, cluster, data):
if isinstance(data, list):
# FIXME(bgaifullin) need to update fuelclient
# the uid 'common' is used to carry cluster-wide attributes
nodes = {n['uid']: n for n in data if 'uid' in n}
custom_info = {
'common': nodes.pop('common', {}),
'nodes': nodes
}
new_format = False
else:
custom_info = data
new_format = True
return _deployment_info_in_compatible_format(
objects.Cluster.replace_deployment_info(cluster, custom_info),
new_format
)
class RunMixin(object):
"""Provides dry_run or noop_run parameters."""
def get_dry_run(self):
return utils.parse_bool(web.input(dry_run='0').dry_run)
def get_noop_run(self):
return utils.parse_bool(web.input(noop_run='0').noop_run)
class SelectedNodesBase(NodesFilterMixin, TransactionExecutorHandler):
"""Base class for running task manager on selected nodes."""
graph_type = None
def get_transaction_options(self, cluster, options):
if not objects.Release.is_lcm_supported(cluster.release):
# this code is relevant only for LCM clusters
return
graph_type = options.get('graph_type') or self.graph_type
graph = graph_type and objects.Cluster.get_deployment_graph(
cluster, graph_type
)
if not graph or not graph['tasks']:
return
nodes_ids = self.get_param_as_set('nodes', default=None)
if nodes_ids is not None:
nodes = self.get_objects_list_or_404(
objects.NodeCollection, nodes_ids
)
nodes_ids = [n.id for n in nodes]
if graph:
return {
'noop_run': options.get('noop_run'),
'dry_run': options.get('dry_run'),
'force': options.get('force'),
'graphs': [{
'type': graph['type'],
'nodes': nodes_ids,
'tasks': options.get('deployment_tasks')
}]
}
def handle_task(self, cluster, **kwargs):
if objects.Release.is_lcm_supported(cluster.release):
# this code is relevant only if the cluster is LCM-ready
try:
transaction_options = self.get_transaction_options(
cluster, kwargs
)
except errors.NailgunException as e:
logger.exception("Failed to get transaction options.")
raise self.http(400, six.text_type(e))
if transaction_options:
return self.start_transaction(cluster, transaction_options)
nodes = self.get_nodes(cluster)
try:
task_manager = self.task_manager(cluster_id=cluster.id)
task = task_manager.execute(nodes, **kwargs)
except Exception as exc:
logger.exception(
u'Cannot execute %s task on nodes: %s',
self.task_manager.__name__, ','.join(n.uid for n in nodes)
)
raise self.http(400, msg=six.text_type(exc))
self.raise_task(task)
@handle_errors
@validate
def PUT(self, cluster_id):
""":returns: JSONized Task object.
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster or nodes not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
return self.handle_task(cluster)
class ProvisionSelectedNodes(SelectedNodesBase):
"""Handler for provisioning selected nodes."""
validator = ProvisionSelectedNodesValidator
task_manager = manager.ProvisioningTaskManager
graph_type = 'provision'
def get_default_nodes(self, cluster):
return TaskHelper.nodes_to_provision(cluster)
@handle_errors
@validate
def PUT(self, cluster_id):
""":returns: JSONized Task object.
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster or nodes not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
# actually, there is no data in the http body. the only reason why
# we use it here is to follow the DRY rule and not convert exceptions
# into http status codes again.
self.checked_data(self.validator.validate_provision, cluster=cluster)
return self.handle_task(cluster)
class BaseDeploySelectedNodes(SelectedNodesBase):
validator = DeploySelectedNodesValidator
task_manager = manager.DeploymentTaskManager
graph_type = consts.DEFAULT_DEPLOYMENT_GRAPH_TYPE
def get_default_nodes(self, cluster):
return TaskHelper.nodes_to_deploy(cluster)
def get_graph_type(self):
return web.input(graph_type=None).graph_type or None
def get_force(self):
return utils.parse_bool(web.input(force='0').force)
def get_nodes(self, cluster):
nodes_to_deploy = super(
BaseDeploySelectedNodes, self).get_nodes(cluster)
self.validate(cluster, nodes_to_deploy, self.get_graph_type())
return nodes_to_deploy
def validate(self, cluster, nodes_to_deploy, graph_type=None):
self.checked_data(self.validator.validate_nodes_to_deploy,
nodes=nodes_to_deploy, cluster_id=cluster.id)
self.checked_data(self.validator.validate_release, cluster=cluster,
graph_type=graph_type)
class DeploySelectedNodes(BaseDeploySelectedNodes, RunMixin):
"""Handler for deployment selected nodes."""
@handle_errors
@validate
def PUT(self, cluster_id):
""":returns: JSONized Task object.
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster or nodes not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
return self.handle_task(
cluster=cluster,
graph_type=self.get_graph_type(),
dry_run=self.get_dry_run(),
noop_run=self.get_noop_run(),
force=self.get_force()
)
class DeploySelectedNodesWithTasks(BaseDeploySelectedNodes, RunMixin):
validator = NodeDeploymentValidator
@handle_errors
@validate
def PUT(self, cluster_id):
""":returns: JSONized Task object.
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster or nodes not found in db)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
data = self.checked_data(
self.validator.validate_deployment,
cluster=cluster,
graph_type=self.get_graph_type())
return self.handle_task(
cluster,
deployment_tasks=data,
graph_type=self.get_graph_type(),
dry_run=self.get_dry_run(),
noop_run=self.get_noop_run(),
force=self.get_force()
)
class TaskDeployGraph(BaseHandler):
validator = GraphSolverVisualizationValidator
@handle_errors
@validate
def GET(self, cluster_id):
""":returns: DOT representation of deployment graph.
:http: * 200 (graph returned)
* 404 (cluster not found in db)
* 400 (failed to get graph)
"""
web.header('Content-Type', 'text/vnd.graphviz', unique=True)
graph_type = web.input(graph_type=None).graph_type or None
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
tasks = objects.Cluster.get_deployment_tasks(cluster, graph_type)
graph = orchestrator_graph.GraphSolver(tasks)
tasks = self.get_param_as_set('tasks', default=[])
parents_for = web.input(parents_for=None).parents_for
remove = self.get_param_as_set('remove')
if tasks:
tasks = self.checked_data(
self.validator.validate,
data=list(tasks),
cluster=cluster,
graph_type=graph_type)
logger.debug('Tasks used in dot graph %s', tasks)
if parents_for:
parents_for = self.checked_data(
self.validator.validate_task_presence,
data=parents_for,
graph=graph)
logger.debug('Graph with predecessors for %s', parents_for)
if remove:
remove = list(remove)
remove = self.checked_data(
self.validator.validate_tasks_types,
data=remove)
logger.debug('Types to remove %s', remove)
visualization = graph_visualization.GraphVisualization(graph)
dotgraph = visualization.get_dotgraph(tasks=tasks,
parents_for=parents_for,
remove=remove)
return dotgraph.to_string()
class SerializedTasksHandler(NodesFilterMixin, BaseHandler):
def get_default_nodes(self, cluster):
if objects.Release.is_lcm_supported(cluster.release):
return objects.Cluster.get_nodes_not_for_deletion(cluster).all()
return TaskHelper.nodes_to_deploy(cluster)
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
""":returns: serialized tasks in json format
:http: * 200 (serialized tasks returned)
* 400 (task based deployment is not allowed for cluster)
* 400 (some nodes belong to a different cluster or are not assigned)
* 404 (cluster is not found)
* 404 (nodes are not found)
"""
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
nodes = self.get_nodes(cluster)
graph_type = web.input(graph_type=None).graph_type or None
task_ids = self.get_param_as_set('tasks')
try:
if objects.Release.is_lcm_supported(cluster.release):
# in order not to repeat quite complex logic, we create
# a temporary task (transaction) instance and pass it to
# the task_deploy serializer.
transaction = objects.Transaction.model(
name=consts.TASK_NAMES.deployment, cluster=cluster)
rv = task.ClusterTransaction.task_deploy(
transaction,
objects.Cluster.get_deployment_tasks(cluster, graph_type),
nodes,
selected_task_ids=task_ids)
objects.Transaction.delete(transaction)
return rv
# for old clusters we have to fallback to old serializers
serialized_tasks = task_based_deployment.TasksSerializer.serialize(
cluster,
nodes,
objects.Cluster.get_deployment_tasks(cluster, graph_type),
task_ids=task_ids
)
return {'tasks_directory': serialized_tasks[0],
'tasks_graph': serialized_tasks[1]}
except errors.TaskBaseDeploymentNotAllowed as exc:
raise self.http(400, msg=six.text_type(exc))
def _deployment_info_in_compatible_format(deployment_info, separate):
# FIXME(bgaifullin) need to update fuelclient
# the uid 'common' is used because fuelclient expects a list of dicts,
# where each dict contains the field 'uid', which will be used as the
# name of a file
data = deployment_info.get('nodes', [])
common = deployment_info.get('common')
if common:
if separate:
data.append(dict(common, uid='common'))
else:
for i, node_info in enumerate(data):
data[i] = utils.dict_merge(common, node_info)
return data
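The compatibility helper above is easiest to see with data. A self-contained re-illustration follows; nailgun's recursive utils.dict_merge is approximated with a plain dict union, which is enough for the flat dicts in this sketch.

import copy

def _in_compatible_format(deployment_info, separate):
    # simplified stand-in for the handler helper above
    data = deployment_info.get('nodes', [])
    common = deployment_info.get('common')
    if common:
        if separate:
            # expose cluster-wide data as a pseudo-node with uid 'common'
            data.append(dict(common, uid='common'))
        else:
            # merge cluster-wide data into every node dict (node keys win)
            data = [dict(common, **node) for node in data]
    return data

info = {'common': {'debug': True}, 'nodes': [{'uid': '1'}, {'uid': '2'}]}
split = _in_compatible_format(copy.deepcopy(info), True)
merged = _in_compatible_format(copy.deepcopy(info), False)
assert split[-1]['uid'] == 'common'
assert all(node['debug'] for node in merged)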

View File

@ -1,94 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from nailgun.api.v1.handlers import base
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.handlers.deployment_graph import \
RelatedDeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.deployment_graph import \
RelatedDeploymentGraphHandler
from nailgun.api.v1.validators import plugin
from nailgun import errors
from nailgun import objects
from nailgun.plugins.manager import PluginManager
class PluginHandler(base.SingleHandler):
validator = plugin.PluginValidator
single = objects.Plugin
class PluginCollectionHandler(base.CollectionHandler):
collection = objects.PluginCollection
validator = plugin.PluginValidator
@handle_errors
@validate
def POST(self):
""":returns: JSONized REST object.
:http: * 201 (object successfully created)
* 400 (invalid object data specified)
* 409 (object with such parameters already exists)
"""
data = self.checked_data(self.validator.validate)
obj = self.collection.single.get_by_name_version(
data['name'], data['version'])
if obj:
raise self.http(409, self.collection.single.to_json(obj))
return super(PluginCollectionHandler, self).POST()
class PluginSyncHandler(base.BaseHandler):
validator = plugin.PluginSyncValidator
@handle_errors
@validate
def POST(self):
""":returns: JSONized REST object.
:http: * 200 (plugins successfully synced)
* 404 (plugin not found in db)
* 400 (problem with parsing metadata file)
"""
data = self.checked_data()
ids = data.get('ids', None)
try:
PluginManager.sync_plugins_metadata(plugin_ids=ids)
except errors.ParseError as exc:
raise self.http(400, msg=six.text_type(exc))
raise self.http(200, {})
class PluginDeploymentGraphHandler(RelatedDeploymentGraphHandler):
"""Plugin Handler for deployment graph configuration."""
related = objects.Plugin
class PluginDeploymentGraphCollectionHandler(
RelatedDeploymentGraphCollectionHandler):
"""Plugin Handler for deployment graphs configuration."""
related = objects.Plugin
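A minimal sketch of calling PluginSyncHandler above, assuming the requests library and a hypothetical /plugins/sync/ route (the real path lived in the removed urls.py).

import json
import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
# Re-read metadata for two installed plugins; omitting 'ids' would sync all.
resp = requests.post(BASE + '/plugins/sync/', data=json.dumps({'ids': [1, 2]}))
print(resp.status_code)  # 200 on success, 400 on a metadata parse error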

View File

@ -1,124 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.handlers import base
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators import plugin_link
from nailgun import errors
from nailgun import objects
class PluginLinkHandler(base.SingleHandler):
validator = plugin_link.PluginLinkValidator
single = objects.PluginLink
def _get_plugin_link_object(self, plugin_id, obj_id):
obj = self.get_object_or_404(self.single, obj_id)
if int(plugin_id) == obj.plugin_id:
return obj
else:
raise self.http(
404,
"Plugin with id {0} not found".format(plugin_id)
)
@handle_errors
@validate
@serialize
def GET(self, plugin_id, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 404 (dashboard entry not found in db)
"""
obj = self._get_plugin_link_object(plugin_id, obj_id)
return self.single.to_dict(obj)
@handle_errors
@validate
@serialize
def PUT(self, plugin_id, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 400 (invalid object data specified)
* 404 (object not found in db)
"""
obj = self._get_plugin_link_object(plugin_id, obj_id)
data = self.checked_data(
self.validator.validate_update,
instance=obj)
self.single.update(obj, data)
return self.single.to_dict(obj)
def PATCH(self, plugin_id, obj_id):
""":returns: JSONized REST object.
:http: * 200 (OK)
* 400 (invalid object data specified)
* 404 (object not found in db)
"""
return self.PUT(plugin_id, obj_id)
@handle_errors
@validate
def DELETE(self, plugin_id, obj_id):
""":returns: JSONized REST object.
:http: * 204 (OK)
* 404 (object not found in db)
"""
obj = self._get_plugin_link_object(plugin_id, obj_id)
self.single.delete(obj)
raise self.http(204)
class PluginLinkCollectionHandler(base.CollectionHandler):
collection = objects.PluginLinkCollection
validator = plugin_link.PluginLinkValidator
@handle_errors
@validate
def GET(self, plugin_id):
""":returns: Collection of JSONized PluginLink objects.
:http: * 200 (OK)
* 404 (plugin not found in db)
"""
self.get_object_or_404(objects.Plugin, plugin_id)
return self.collection.to_list(
self.collection.get_by_plugin_id(plugin_id)
)
@handle_errors
@validate
def POST(self, plugin_id):
""":returns: JSONized REST object.
:http: * 201 (object successfully created)
* 400 (invalid object data specified)
"""
data = self.checked_data()
try:
new_obj = self.collection.create_with_plugin_id(data, plugin_id)
except errors.CannotCreate as exc:
raise self.http(400, exc.message)
raise self.http(201, self.collection.single.to_json(new_obj))
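For illustration, a hedged sketch of creating a dashboard link through the collection handler above; the route shape and payload fields are assumptions.

import json
import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint and route shape
link = {'title': 'Dashboard', 'url': 'http://example.com/', 'description': ''}
resp = requests.post(BASE + '/plugins/1/links/', data=json.dumps(link))
# 201 on success, 400 if the validator rejects the payload
print(resp.status_code)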

View File

@ -1,165 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers dealing with releases
"""
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import OrchestratorDeploymentTasksHandler
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.handlers.deployment_graph import \
RelatedDeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.deployment_graph import \
RelatedDeploymentGraphHandler
from nailgun.api.v1.validators.release import \
ReleaseAttributesMetadataValidator
from nailgun.api.v1.validators.release import ReleaseNetworksValidator
from nailgun.api.v1.validators.release import ReleaseValidator
from nailgun.objects import Release
from nailgun.objects import ReleaseCollection
class ReleaseHandler(SingleHandler):
"""Release single handler"""
single = Release
validator = ReleaseValidator
class ReleaseAttributesMetadataHandler(SingleHandler):
"""Release attributes metadata handler"""
single = Release
validator = ReleaseAttributesMetadataValidator
@handle_errors
@validate
@serialize
def GET(self, obj_id):
""":returns: JSONized Release attributes metadata.
:http: * 200 (OK)
* 404 (release not found in db)
"""
release = self.get_object_or_404(self.single, obj_id)
return release['attributes_metadata']
@handle_errors
@validate
@serialize
def PUT(self, obj_id):
""":returns: JSONized Release attributes metadata.
:http: * 200 (OK)
* 400 (wrong data specified)
* 404 (release not found in db)
"""
release = self.get_object_or_404(self.single, obj_id)
data = self.checked_data()
self.single.update(release, {'attributes_metadata': data})
return release['attributes_metadata']
class ReleaseCollectionHandler(CollectionHandler):
"""Release collection handler"""
validator = ReleaseValidator
collection = ReleaseCollection
@handle_errors
@validate
@serialize
def GET(self):
""":returns: Sorted releases' collection in JSON format
:http: * 200 (OK)
"""
q = sorted(self.collection.all(), reverse=True)
return self.collection.to_list(q)
class ReleaseNetworksHandler(SingleHandler):
"""Release Handler for network metadata"""
single = Release
validator = ReleaseNetworksValidator
@handle_errors
@validate
@serialize
def GET(self, obj_id):
"""Read release networks metadata
:returns: Release networks metadata
:http: * 201 (object successfully created)
* 400 (invalid object data specified)
* 404 (release object not found)
"""
obj = self.get_object_or_404(self.single, obj_id)
return obj['networks_metadata']
@handle_errors
@validate
@serialize
def PUT(self, obj_id):
"""Updates release networks metadata
:returns: Release networks metadata
:http: * 201 (object successfully created)
* 400 (invalid object data specified)
* 404 (release object not found)
"""
obj = self.get_object_or_404(self.single, obj_id)
data = self.checked_data()
self.single.update(obj, {'networks_metadata': data})
return obj['networks_metadata']
def POST(self, obj_id):
"""Creation of metadata disallowed
:http: * 405 (method not supported)
"""
raise self.http(405, 'Create not supported for this entity')
def DELETE(self, obj_id):
"""Deletion of metadata disallowed
:http: * 405 (method not supported)
"""
raise self.http(405, 'Delete not supported for this entity')
class ReleaseDeploymentTasksHandler(OrchestratorDeploymentTasksHandler):
"""Release Handler for deployment tasks configuration (legacy)."""
single = Release
class ReleaseDeploymentGraphHandler(RelatedDeploymentGraphHandler):
"""Release Handler for deployment graph configuration."""
related = Release
class ReleaseDeploymentGraphCollectionHandler(
RelatedDeploymentGraphCollectionHandler):
"""Release Handler for deployment graphs configuration."""
related = Release
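A hedged round-trip sketch for ReleaseAttributesMetadataHandler above; the route name and port are guesses, since the URL mapping is not part of this excerpt.

import json
import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
meta = requests.get(BASE + '/releases/2/attributes_metadata').json()
# Tweak the metadata locally, then PUT it back through the handler.
meta.setdefault('editable', {})
requests.put(BASE + '/releases/2/attributes_metadata', data=json.dumps(meta))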

View File

@ -1,84 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Handlers for removed resources
"""
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
class BaseRemovedInHandler(BaseHandler):
"""Removed resource base handler"""
@property
def fuel_version(self):
raise NotImplementedError
@handle_errors
@validate
@serialize
def GET(self):
"""A stub for the request. Always returns 410 with removed message.
:http: 410 (Gone)
:raises: webapi.Gone Exception
:return: Removed in Fuel version message
"""
message = u"Removed in Fuel version {0}".format(self.fuel_version)
raise self.http(410, message)
HEAD = POST = PUT = DELETE = GET
class RemovedIn51Handler(BaseRemovedInHandler):
"""Removed resource handler for Fuel 5.1"""
fuel_version = "5.1"
class RemovedIn51RedHatAccountHandler(RemovedIn51Handler):
pass
class RemovedIn51RedHatSetupHandler(RemovedIn51Handler):
pass
class RemovedIn10Handler(BaseRemovedInHandler):
"""Removed resource handler for Fuel 10"""
fuel_version = "10"
@handle_errors
@validate
@serialize
def GET(self, cluster_id):
"""A stub for the request. Always returns 410 with removed message.
:http: 410 (Gone)
:raises: webapi.Gone Exception
:return: Removed in Fuel version message
"""
message = u"Removed in Fuel version {0}".format(self.fuel_version)
raise self.http(410, message)
class RemovedIn10VmwareAttributesDefaultsHandler(RemovedIn10Handler):
pass
class RemovedIn10VmwareAttributesHandler(RemovedIn10Handler):
pass
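A small illustration of the stub behaviour above; the legacy path is purely hypothetical.

import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint; path is a guess
resp = requests.get(BASE + '/redhat/account/')
# Any verb on a retired resource yields 410 Gone with the removal message.
print(resp.status_code, resp.text)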

View File

@ -1,137 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from nailgun.api.v1.handlers import base
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.role import RoleValidator
from nailgun import errors
from nailgun import objects
from nailgun.objects.serializers.role import RoleSerializer
class RoleMixIn(object):
def _get_object_or_404(self, obj_type, obj_id):
obj_cls = {
'releases': objects.Release,
'clusters': objects.Cluster,
}[obj_type]
return obj_cls, self.get_object_or_404(obj_cls, obj_id)
class RoleHandler(base.SingleHandler, RoleMixIn):
validator = RoleValidator
def _check_role(self, obj_cls, obj, role_name):
if role_name not in obj_cls.get_own_roles(obj):
raise self.http(
404,
"Role '{}' is not found for the {} {}".format(
role_name, obj_cls.__name__.lower(), obj.id))
@handle_errors
@validate
@serialize
def GET(self, obj_type, obj_id, role_name):
"""Retrieve role
:http:
* 200 (OK)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
self._check_role(obj_cls, obj, role_name)
return RoleSerializer.serialize_from_obj(obj_cls, obj, role_name)
@handle_errors
@validate
@serialize
def PUT(self, obj_type, obj_id, role_name):
"""Update role
:http:
* 200 (OK)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
self._check_role(obj_cls, obj, role_name)
data = self.checked_data(
self.validator.validate_update, instance_cls=obj_cls, instance=obj)
obj_cls.update_role(obj, data)
return RoleSerializer.serialize_from_obj(obj_cls, obj, role_name)
@handle_errors
def DELETE(self, obj_type, obj_id, role_name):
"""Remove role
:http:
* 204 (object successfully deleted)
* 400 (cannot delete object)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
self._check_role(obj_cls, obj, role_name)
try:
self.validator.validate_delete(obj_cls, obj, role_name)
except errors.CannotDelete as exc:
raise self.http(400, exc.message)
obj_cls.remove_role(obj, role_name)
raise self.http(204)
class RoleCollectionHandler(base.CollectionHandler, RoleMixIn):
validator = RoleValidator
@handle_errors
@validate
def POST(self, obj_type, obj_id):
"""Create role for release or cluster
:http:
* 201 (object successfully created)
* 400 (invalid object data specified)
* 409 (object with such parameters already exists)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
try:
data = self.checked_data(
self.validator.validate_create,
instance_cls=obj_cls,
instance=obj)
except errors.AlreadyExists as exc:
raise self.http(409, exc.message)
role_name = data['name']
obj_cls.update_role(obj, data)
raise self.http(
201, RoleSerializer.serialize_from_obj(obj_cls, obj, role_name))
@handle_errors
@validate
@serialize
def GET(self, obj_type, obj_id):
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
role_names = six.iterkeys(obj_cls.get_roles(obj))
return [RoleSerializer.serialize_from_obj(obj_cls, obj, name)
for name in role_names]
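A hedged sketch of creating a role on a release via the collection handler above; the route and the (abridged) payload are assumptions and may not satisfy the real RoleValidator schema.

import json
import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
role = {'name': 'custom', 'meta': {'name': 'Custom', 'description': 'demo'}}
resp = requests.post(BASE + '/releases/2/roles/', data=json.dumps(role))
# 201 created, 409 if a role with this name already exists
print(resp.status_code)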

View File

@ -1,139 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
from nailgun.api.v1.handlers import base
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.tag import TagValidator
from nailgun import errors
from nailgun import objects
from nailgun.objects.serializers.tag import TagSerializer
class TagMixIn(object):
def _get_object_or_404(self, obj_type, obj_id):
obj_cls = {
'releases': objects.Release,
'clusters': objects.Cluster,
}[obj_type]
return obj_cls, self.get_object_or_404(obj_cls, obj_id)
class TagHandler(base.SingleHandler, TagMixIn):
validator = TagValidator
def _check_tag(self, obj_cls, obj, tag_name):
if tag_name not in obj_cls.get_own_tags(obj):
raise self.http(
404,
"Tag '{}' is not found for the {} {}".format(
tag_name, obj_cls.__name__.lower(), obj.id))
@handle_errors
@validate
@serialize
def GET(self, obj_type, obj_id, tag_name):
"""Retrieve tag
:http:
* 200 (OK)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
self._check_tag(obj_cls, obj, tag_name)
return TagSerializer.serialize_from_obj(obj_cls, obj, tag_name)
@handle_errors
@validate
@serialize
def PUT(self, obj_type, obj_id, tag_name):
"""Update tag
:http:
* 200 (OK)
* 400 (wrong data specified)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
self._check_tag(obj_cls, obj, tag_name)
data = self.checked_data(
self.validator.validate_update, instance_cls=obj_cls, instance=obj)
obj_cls.update_tag(obj, data)
return TagSerializer.serialize_from_obj(obj_cls, obj, tag_name)
@handle_errors
def DELETE(self, obj_type, obj_id, tag_name):
"""Remove tag
:http:
* 204 (object successfully deleted)
* 400 (cannot delete object)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
self._check_tag(obj_cls, obj, tag_name)
obj_cls.remove_tag(obj, tag_name)
raise self.http(204)
class TagCollectionHandler(base.CollectionHandler, TagMixIn):
validator = TagValidator
@handle_errors
@validate
def POST(self, obj_type, obj_id):
"""Create tag for release or cluster
:http:
* 201 (object successfully created)
* 400 (invalid object data specified)
* 409 (object with such parameters already exists)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
try:
data = self.checked_data(
self.validator.validate_create,
instance_cls=obj_cls,
instance=obj)
except errors.AlreadyExists as exc:
raise self.http(409, exc.message)
tag_name = data['name']
obj_cls.update_tag(obj, data)
raise self.http(
201, TagSerializer.serialize_from_obj(obj_cls, obj, tag_name))
@handle_errors
@validate
@serialize
def GET(self, obj_type, obj_id):
"""Retrieve tag list of release or cluster
:http:
* 200 (OK)
* 404 (no such object found)
"""
obj_cls, obj = self._get_object_or_404(obj_type, obj_id)
tag_names = six.iterkeys(obj_cls.get_tags_metadata(obj))
return [TagSerializer.serialize_from_obj(obj_cls, obj, tag_name)
for tag_name in tag_names]
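For illustration, deleting an owned tag through TagHandler above; the path is an assumed route for this excerpt.

import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
# Remove an owned tag from a cluster; 404 if the tag is absent or inherited.
resp = requests.delete(BASE + '/clusters/1/tags/my-tag')
print(resp.status_code)  # 204 on success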

View File

@ -1,91 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import web
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.base import SingleHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.task import TaskValidator
from nailgun import errors
from nailgun import objects
from nailgun import utils
"""
Handlers dealing with tasks
"""
class TaskHandler(SingleHandler):
"""Task single handler"""
single = objects.Task
validator = TaskValidator
@handle_errors
@validate
def DELETE(self, obj_id):
""":returns: Empty string
:http: * 204 (object successfully marked as deleted)
* 400 (object could not deleted)
* 404 (object not found in db)
"""
obj = self.get_object_or_404(
self.single,
obj_id
)
force = utils.parse_bool(web.input(force='0').force)
try:
self.validator.validate_delete(None, obj, force=force)
except errors.CannotDelete as exc:
raise self.http(400, exc.message)
self.single.delete(obj)
raise self.http(204)
class TaskCollectionHandler(CollectionHandler):
"""Task collection handler"""
collection = objects.TaskCollection
validator = TaskValidator
@handle_errors
@validate
@serialize
def GET(self):
"""May receive cluster_id parameter to filter list of tasks
:returns: Collection of JSONized Task objects.
:http: * 200 (OK)
* 404 (task not found in db)
"""
cluster_id = web.input(cluster_id=None).cluster_id
if cluster_id is not None:
return self.collection.to_list(
self.collection.get_by_cluster_id(cluster_id)
)
else:
return self.collection.to_list(self.collection.all_not_deleted())
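A minimal sketch of the force flag handled by TaskHandler.DELETE above; endpoint and port are assumptions.

import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
# force=1 lets the validator delete a task that is still running.
resp = requests.delete(BASE + '/tasks/42/', params={'force': 1})
print(resp.status_code)  # 204 on success, 400 if deletion is refused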

View File

@ -1,105 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import web
from nailgun.api.v1.handlers.base import CollectionHandler
from nailgun.api.v1.handlers.tasks import TaskHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators.transaction import TransactionValidator
from nailgun import errors
from nailgun import objects
"""
Handlers dealing with all transactions (tasks)
"""
class TransactionHandler(TaskHandler):
"""Transaction single handler"""
single = objects.Transaction
class TransactionCollectionHandler(CollectionHandler):
"""Transaction collection handler"""
collection = objects.TransactionCollection
validator = TransactionValidator
@handle_errors
@validate
@serialize
def GET(self):
"""May receive cluster_id parameter to filter list of tasks
:returns: Collection of JSONized Task objects.
:http: * 200 (OK)
* 400 (wrong attributes data specified)
"""
cluster_id = web.input(cluster_id=None).cluster_id
statuses = self.get_param_as_set('statuses')
transaction_types = self.get_param_as_set('transaction_types')
try:
self.validator.validate_query(statuses=statuses,
transaction_types=transaction_types)
except errors.InvalidData as exc:
raise self.http(400, exc.message)
return self.collection.to_list(
self.collection.get_transactions(
cluster_id=cluster_id,
statuses=statuses,
transaction_types=transaction_types)
)
class BaseTransactionDataHandler(TransactionHandler):
get_data = None
@handle_errors
@validate
@serialize
def GET(self, transaction_id):
""":returns: Collection of JSONized DeploymentInfo objects.
:http: * 200 (OK)
* 404 (transaction not found in db)
"""
transaction = self.get_object_or_404(objects.Transaction,
transaction_id)
return self.get_data(transaction)
class TransactionDeploymentInfo(BaseTransactionDataHandler):
get_data = objects.Transaction.get_deployment_info
class TransactionClusterSettings(BaseTransactionDataHandler):
get_data = objects.Transaction.get_cluster_settings
class TransactionNetworkSettings(BaseTransactionDataHandler):
get_data = objects.Transaction.get_network_settings
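A hedged sketch of the set-valued filters accepted by the collection handler above; get_param_as_set() splits comma-separated values, so filters are passed as single comma-joined strings. Route and port are assumptions.

import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
resp = requests.get(BASE + '/transactions/',
                    params={'cluster_id': 1,
                            'statuses': 'ready,error',
                            'transaction_types': 'deployment'})
for tx in resp.json():
    print(tx['id'], tx['status'])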

View File

@ -1,42 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Product info handlers
"""
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.settings import settings
class VersionHandler(BaseHandler):
"""Version info handler"""
@handle_errors
@validate
@serialize
def GET(self):
""":returns: FUEL/FUELWeb commit SHA, release version.
:http: * 200 (OK)
"""
version = settings.VERSION
method = settings.AUTH['AUTHENTICATION_METHOD']
version['auth_required'] = method in ['fake', 'keystone']
return version
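A tiny usage sketch for VersionHandler above; the path and the response keys shown are assumptions for illustration.

import requests

BASE = 'http://10.20.0.2:8000/api/v1'  # assumed endpoint
version = requests.get(BASE + '/version').json()
# auth_required mirrors settings.AUTH['AUTHENTICATION_METHOD']
print(version.get('release'), version.get('auth_required'))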

View File

@ -1,126 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import traceback
import six
import web
from nailgun.api.v1.handlers.base import BaseHandler
from nailgun.api.v1.handlers.base import handle_errors
from nailgun.api.v1.handlers.base import serialize
from nailgun.api.v1.handlers.base import validate
from nailgun.api.v1.validators import node as validators
from nailgun.logger import logger
from nailgun import objects
from nailgun.orchestrator import orchestrator_graph
from nailgun.task import manager
class SpawnVmsHandler(BaseHandler):
"""Handler for provision and spawn vms on virt nodes."""
task_manager = manager.SpawnVMsTaskManager
validator = validators.DeploySelectedNodesValidator
def get_tasks(self, cluster, graph_type):
"""Get deployment tasks for VMs spawning.
:param cluster: models.Cluster instance
:type cluster: models.Cluster
:param graph_type: Deployment graph type
:type graph_type: basestring
:returns: list of tasks ids
:rtype: list[basestring]
"""
tasks = objects.Cluster.get_deployment_tasks(cluster, graph_type)
graph = orchestrator_graph.GraphSolver()
graph.add_tasks(tasks)
subgraph = graph.find_subgraph(end='generate_vms')
return [task['id'] for task in subgraph.topology]
def get_nodes(self, cluster):
return objects.Cluster.get_nodes_to_spawn_vms(cluster)
def handle_task(self, cluster, **kwargs):
nodes = self.get_nodes(cluster)
if nodes:
try:
task_manager = self.task_manager(cluster_id=cluster.id)
task = task_manager.execute(nodes_to_provision_deploy=nodes,
**kwargs)
except Exception as exc:
                logger.warning(
                    u'Cannot execute %s task on nodes: %s',
                    self.task_manager.__name__, traceback.format_exc())
raise self.http(400, six.text_type(exc))
self.raise_task(task)
else:
raise self.http(400, "No VMs to spawn")
@handle_errors
@validate
def PUT(self, cluster_id):
""":returns: JSONized Task object.
:http: * 200 (task successfully executed)
* 202 (task scheduled for execution)
* 400 (data validation failed)
* 404 (cluster not found in db)
"""
graph_type = web.input(graph_type=None).graph_type or None
cluster = self.get_object_or_404(objects.Cluster, cluster_id)
data = self.get_tasks(cluster, graph_type)
return self.handle_task(cluster, deployment_tasks=data,
graph_type=graph_type)
class NodeVMsHandler(BaseHandler):
"""Node vms handler"""
validator = validators.NodeVMsValidator
@handle_errors
@validate
@serialize
def GET(self, node_id):
""":returns: JSONized node vms_conf.
:http: * 200 (OK)
* 404 (node not found in db)
"""
node = self.get_object_or_404(objects.Node, node_id)
node_vms = node.vms_conf
return {"vms_conf": node_vms}
@handle_errors
@validate
@serialize
def PUT(self, node_id):
""":returns: JSONized node vms_conf.
:http: * 200 (OK)
               * 400 (invalid vms data specified)
* 404 (node not found in db)
"""
node = self.get_object_or_404(objects.Node, node_id)
data = self.checked_data()
node.vms_conf = data.get("vms_conf")
return {"vms_conf": node.vms_conf}

View File

@@ -1,476 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import web
from nailgun.extensions import get_all_extensions
from nailgun.api.v1.handlers.assignment import NodeAssignmentHandler
from nailgun.api.v1.handlers.assignment import NodeUnassignmentHandler
from nailgun.api.v1.handlers.capacity import CapacityLogCsvHandler
from nailgun.api.v1.handlers.capacity import CapacityLogHandler
from nailgun.api.v1.handlers.cluster import ClusterAttributesDefaultsHandler
from nailgun.api.v1.handlers.cluster import ClusterAttributesDeployedHandler
from nailgun.api.v1.handlers.cluster import ClusterAttributesHandler
from nailgun.api.v1.handlers.cluster import ClusterChangesForceRedeployHandler
from nailgun.api.v1.handlers.cluster import ClusterChangesHandler
from nailgun.api.v1.handlers.cluster import ClusterCollectionHandler
from nailgun.api.v1.handlers.cluster import \
ClusterDeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.cluster import ClusterDeploymentGraphHandler
from nailgun.api.v1.handlers.cluster import ClusterDeploymentTasksHandler
from nailgun.api.v1.handlers.cluster import ClusterExtensionsHandler
from nailgun.api.v1.handlers.cluster import ClusterGeneratedData
from nailgun.api.v1.handlers.cluster import ClusterHandler
from nailgun.api.v1.handlers.cluster import ClusterOwnDeploymentTasksHandler
from nailgun.api.v1.handlers.cluster import \
ClusterPluginsDeploymentTasksHandler
from nailgun.api.v1.handlers.cluster import \
ClusterReleaseDeploymentTasksHandler
from nailgun.api.v1.handlers.cluster import ClusterResetHandler
from nailgun.api.v1.handlers.cluster import ClusterStopDeploymentHandler
from nailgun.api.v1.handlers.component import ComponentCollectionHandler
from nailgun.api.v1.handlers.removed import \
RemovedIn10VmwareAttributesDefaultsHandler
from nailgun.api.v1.handlers.removed import RemovedIn10VmwareAttributesHandler
from nailgun.api.v1.handlers.cluster_plugin_link \
import ClusterPluginLinkCollectionHandler
from nailgun.api.v1.handlers.cluster_plugin_link \
import ClusterPluginLinkHandler
from nailgun.api.v1.handlers.deployment_history \
import DeploymentHistoryCollectionHandler
from nailgun.api.v1.handlers.extension import ExtensionHandler
from nailgun.api.v1.handlers.logs import LogEntryCollectionHandler
from nailgun.api.v1.handlers.logs import LogPackageDefaultConfig
from nailgun.api.v1.handlers.logs import LogPackageHandler
from nailgun.api.v1.handlers.logs import LogSourceByNodeCollectionHandler
from nailgun.api.v1.handlers.logs import LogSourceCollectionHandler
from nailgun.api.v1.handlers.logs import SnapshotDownloadHandler
from nailgun.api.v1.handlers.node_group import NodeGroupCollectionHandler
from nailgun.api.v1.handlers.node_group import NodeGroupHandler
from nailgun.api.v1.handlers.node import NodeAgentHandler
from nailgun.api.v1.handlers.node import NodeAttributesDefaultsHandler
from nailgun.api.v1.handlers.node import NodeAttributesHandler
from nailgun.api.v1.handlers.node import NodeCollectionHandler
from nailgun.api.v1.handlers.node import NodeHandler
from nailgun.api.v1.handlers.node import NodesAllocationStatsHandler
from nailgun.api.v1.handlers.plugin import PluginCollectionHandler
from nailgun.api.v1.handlers.plugin import \
PluginDeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.plugin import PluginDeploymentGraphHandler
from nailgun.api.v1.handlers.plugin import PluginHandler
from nailgun.api.v1.handlers.plugin import PluginSyncHandler
from nailgun.api.v1.handlers.plugin_link import PluginLinkCollectionHandler
from nailgun.api.v1.handlers.plugin_link import PluginLinkHandler
from nailgun.api.v1.handlers.notifications import NotificationCollectionHandler
from nailgun.api.v1.handlers.notifications import \
NotificationCollectionStatsHandler
from nailgun.api.v1.handlers.notifications import \
NotificationStatusHandler
from nailgun.api.v1.handlers.notifications import NotificationHandler
from nailgun.api.v1.handlers.orchestrator import DefaultDeploymentInfo
from nailgun.api.v1.handlers.orchestrator import DefaultPostPluginsHooksInfo
from nailgun.api.v1.handlers.orchestrator import DefaultPrePluginsHooksInfo
from nailgun.api.v1.handlers.orchestrator import DefaultProvisioningInfo
from nailgun.api.v1.handlers.orchestrator import DeploymentInfo
from nailgun.api.v1.handlers.orchestrator import DeploySelectedNodes
from nailgun.api.v1.handlers.orchestrator import DeploySelectedNodesWithTasks
from nailgun.api.v1.handlers.orchestrator import ProvisioningInfo
from nailgun.api.v1.handlers.orchestrator import ProvisionSelectedNodes
from nailgun.api.v1.handlers.orchestrator import SerializedTasksHandler
from nailgun.api.v1.handlers.orchestrator import TaskDeployGraph
from nailgun.api.v1.handlers.release import ReleaseAttributesMetadataHandler
from nailgun.api.v1.handlers.release import ReleaseCollectionHandler
from nailgun.api.v1.handlers.release import \
ReleaseDeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.release import ReleaseDeploymentGraphHandler
from nailgun.api.v1.handlers.release import ReleaseDeploymentTasksHandler
from nailgun.api.v1.handlers.release import ReleaseHandler
from nailgun.api.v1.handlers.release import ReleaseNetworksHandler
from nailgun.api.v1.handlers.role import RoleCollectionHandler
from nailgun.api.v1.handlers.role import RoleHandler
from nailgun.api.v1.handlers.tag import TagCollectionHandler
from nailgun.api.v1.handlers.tag import TagHandler
from nailgun.api.v1.handlers.tasks import TaskCollectionHandler
from nailgun.api.v1.handlers.tasks import TaskHandler
from nailgun.api.v1.handlers.transactions import TransactionClusterSettings
from nailgun.api.v1.handlers.transactions import TransactionCollectionHandler
from nailgun.api.v1.handlers.transactions import TransactionDeploymentInfo
from nailgun.api.v1.handlers.transactions import TransactionHandler
from nailgun.api.v1.handlers.transactions import TransactionNetworkSettings
from nailgun.api.v1.handlers.version import VersionHandler
from nailgun.api.v1.handlers.vms import NodeVMsHandler
from nailgun.api.v1.handlers.vms import SpawnVmsHandler
from nailgun.api.v1.handlers.removed import RemovedIn51RedHatAccountHandler
from nailgun.api.v1.handlers.removed import RemovedIn51RedHatSetupHandler
from nailgun.api.v1.handlers.master_node_settings \
import MasterNodeSettingsHandler
from nailgun.api.v1.handlers.openstack_config \
import OpenstackConfigCollectionHandler
from nailgun.api.v1.handlers.openstack_config \
import OpenstackConfigExecuteHandler
from nailgun.api.v1.handlers.openstack_config import OpenstackConfigHandler
from nailgun.api.v1.handlers.deployment_graph import \
DeploymentGraphCollectionHandler
from nailgun.api.v1.handlers.deployment_graph import \
DeploymentGraphHandler
from nailgun.api.v1.handlers.deployment_graph import GraphsExecutorHandler
from nailgun.api.v1.handlers.deployment_sequence import \
SequenceCollectionHandler
from nailgun.api.v1.handlers.deployment_sequence import SequenceExecutorHandler
from nailgun.api.v1.handlers.deployment_sequence import SequenceHandler
from nailgun.settings import settings
urls = (
r'/releases/?$',
ReleaseCollectionHandler,
r'/releases/(?P<obj_id>\d+)/attributes_metadata/?$',
ReleaseAttributesMetadataHandler,
r'/releases/(?P<obj_id>\d+)/?$',
ReleaseHandler,
r'/releases/(?P<obj_id>\d+)/networks/?$',
ReleaseNetworksHandler,
r'/releases/(?P<obj_id>\d+)/deployment_tasks/?$',
ReleaseDeploymentTasksHandler,
r'/releases/(?P<release_id>\d+)/components/?$',
ComponentCollectionHandler,
r'/(?P<obj_type>releases|clusters)/(?P<obj_id>\d+)/roles/?$',
RoleCollectionHandler,
r'/(?P<obj_type>releases|clusters)/(?P<obj_id>\d+)/roles/'
'(?P<role_name>[a-zA-Z0-9-_]+)/?$',
RoleHandler,
r'/(?P<obj_type>releases|clusters)/(?P<obj_id>\d+)/tags/?$',
TagCollectionHandler,
r'/(?P<obj_type>releases|clusters)/(?P<obj_id>\d+)/tags/'
'(?P<tag_name>[a-zA-Z0-9-_]+)/?$',
TagHandler,
r'/releases/(?P<obj_id>\d+)/deployment_graphs/?$',
ReleaseDeploymentGraphCollectionHandler,
r'/releases/(?P<obj_id>\d+)/deployment_graphs/'
r'(?P<graph_type>[a-zA-Z0-9-_]+)/?$',
ReleaseDeploymentGraphHandler,
r'/clusters/?$',
ClusterCollectionHandler,
r'/clusters/(?P<obj_id>\d+)/?$',
ClusterHandler,
r'/clusters/(?P<cluster_id>\d+)/changes/?$',
ClusterChangesHandler,
r'/clusters/(?P<cluster_id>\d+)/changes/redeploy/?$',
ClusterChangesForceRedeployHandler,
r'/clusters/(?P<cluster_id>\d+)/attributes/?$',
ClusterAttributesHandler,
r'/clusters/(?P<cluster_id>\d+)/attributes/defaults/?$',
ClusterAttributesDefaultsHandler,
r'/clusters/(?P<cluster_id>\d+)/attributes/deployed/?$',
ClusterAttributesDeployedHandler,
r'/clusters/(?P<cluster_id>\d+)/orchestrator/deployment/?$',
DeploymentInfo,
r'/clusters/(?P<cluster_id>\d+)/orchestrator/deployment/defaults/?$',
DefaultDeploymentInfo,
r'/clusters/(?P<cluster_id>\d+)/orchestrator/provisioning/?$',
ProvisioningInfo,
r'/clusters/(?P<cluster_id>\d+)/orchestrator/provisioning/defaults/?$',
DefaultProvisioningInfo,
r'/clusters/(?P<cluster_id>\d+)/generated/?$',
ClusterGeneratedData,
r'/clusters/(?P<cluster_id>\d+)/orchestrator/plugins_pre_hooks/?$',
DefaultPrePluginsHooksInfo,
r'/clusters/(?P<cluster_id>\d+)/orchestrator/plugins_post_hooks/?$',
DefaultPostPluginsHooksInfo,
r'/clusters/(?P<cluster_id>\d+)/serialized_tasks/?$',
SerializedTasksHandler,
r'/clusters/(?P<cluster_id>\d+)/provision/?$',
ProvisionSelectedNodes,
r'/clusters/(?P<cluster_id>\d+)/deploy/?$',
DeploySelectedNodes,
r'/clusters/(?P<cluster_id>\d+)/deploy_tasks/?$',
DeploySelectedNodesWithTasks,
r'/clusters/(?P<cluster_id>\d+)/deploy_tasks/graph.gv$',
TaskDeployGraph,
r'/clusters/(?P<cluster_id>\d+)/stop_deployment/?$',
ClusterStopDeploymentHandler,
r'/clusters/(?P<cluster_id>\d+)/reset/?$',
ClusterResetHandler,
r'/clusters/(?P<obj_id>\d+)/deployment_tasks/?$',
ClusterDeploymentTasksHandler,
r'/clusters/(?P<obj_id>\d+)/deployment_tasks/own/?$',
ClusterOwnDeploymentTasksHandler,
r'/clusters/(?P<obj_id>\d+)/deployment_tasks/plugins/?$',
ClusterPluginsDeploymentTasksHandler,
r'/clusters/(?P<obj_id>\d+)/deployment_tasks/release/?$',
ClusterReleaseDeploymentTasksHandler,
r'/clusters/(?P<obj_id>\d+)/deployment_graphs/?$',
ClusterDeploymentGraphCollectionHandler,
r'/clusters/(?P<obj_id>\d+)/deployment_graphs/'
r'(?P<graph_type>[a-zA-Z0-9-_]+)/?$',
ClusterDeploymentGraphHandler,
r'/graphs/?$',
DeploymentGraphCollectionHandler,
r'/graphs/(?P<obj_id>\d+)/?$',
DeploymentGraphHandler,
r'/graphs/execute/?$',
GraphsExecutorHandler,
r'/sequences/?$',
SequenceCollectionHandler,
r'/sequences/(?P<obj_id>\d+)/?$',
SequenceHandler,
r'/sequences/(?P<obj_id>\d+)/execute/?$',
SequenceExecutorHandler,
r'/clusters/(?P<cluster_id>\d+)/assignment/?$',
NodeAssignmentHandler,
r'/clusters/(?P<cluster_id>\d+)/unassignment/?$',
NodeUnassignmentHandler,
r'/clusters/(?P<cluster_id>\d+)/vmware_attributes/?$',
RemovedIn10VmwareAttributesHandler,
r'/clusters/(?P<cluster_id>\d+)/vmware_attributes/defaults/?$',
RemovedIn10VmwareAttributesDefaultsHandler,
r'/clusters/(?P<cluster_id>\d+)/plugin_links/?$',
ClusterPluginLinkCollectionHandler,
r'/clusters/(?P<cluster_id>\d+)/plugin_links/(?P<obj_id>\d+)/?$',
ClusterPluginLinkHandler,
r'/extensions/?$',
ExtensionHandler,
r'/clusters/(?P<cluster_id>\d+)/extensions/?$',
ClusterExtensionsHandler,
r'/nodegroups/?$',
NodeGroupCollectionHandler,
r'/nodegroups/(?P<obj_id>\d+)/?$',
NodeGroupHandler,
r'/nodes/?$',
NodeCollectionHandler,
r'/nodes/agent/?$',
NodeAgentHandler,
r'/nodes/(?P<obj_id>\d+)/?$',
NodeHandler,
r'/nodes/(?P<node_id>\d+)/attributes/?$',
NodeAttributesHandler,
r'/nodes/(?P<node_id>\d+)/attributes/defaults/?$',
NodeAttributesDefaultsHandler,
r'/nodes/allocation/stats/?$',
NodesAllocationStatsHandler,
r'/tasks/?$',
TaskCollectionHandler,
r'/tasks/(?P<obj_id>\d+)/?$',
TaskHandler,
r'/transactions/?$',
TransactionCollectionHandler,
r'/transactions/(?P<obj_id>\d+)/?$',
TransactionHandler,
r'/transactions/(?P<transaction_id>\d+)/deployment_history/?$',
DeploymentHistoryCollectionHandler,
r'/transactions/(?P<transaction_id>\d+)/deployment_info/?$',
TransactionDeploymentInfo,
r'/transactions/(?P<transaction_id>\d+)/network_configuration/?$',
TransactionNetworkSettings,
r'/transactions/(?P<transaction_id>\d+)/settings/?$',
TransactionClusterSettings,
r'/plugins/(?P<plugin_id>\d+)/links/?$',
PluginLinkCollectionHandler,
r'/plugins/(?P<plugin_id>\d+)/links/(?P<obj_id>\d+)/?$',
PluginLinkHandler,
r'/plugins/(?P<obj_id>\d+)/?$',
PluginHandler,
r'/plugins/?$',
PluginCollectionHandler,
r'/plugins/sync/?$',
PluginSyncHandler,
r'/plugins/(?P<obj_id>\d+)/deployment_graphs/?$',
PluginDeploymentGraphCollectionHandler,
r'/plugins/(?P<obj_id>\d+)/deployment_graphs/'
r'(?P<graph_type>[a-zA-Z0-9-_]+)/?$',
PluginDeploymentGraphHandler,
r'/notifications/?$',
NotificationCollectionHandler,
r'/notifications/change_status/?$',
NotificationStatusHandler,
r'/notifications/(?P<obj_id>\d+)/?$',
NotificationHandler,
r'/notifications/stats/?$',
NotificationCollectionStatsHandler,
r'/dump/(?P<snapshot_name>[A-Za-z0-9-_.]+)$',
SnapshotDownloadHandler,
r'/logs/?$',
LogEntryCollectionHandler,
r'/logs/package/?$',
LogPackageHandler,
r'/logs/package/config/default/?$',
LogPackageDefaultConfig,
r'/logs/sources/?$',
LogSourceCollectionHandler,
r'/logs/sources/nodes/(?P<node_id>\d+)/?$',
LogSourceByNodeCollectionHandler,
r'/version/?$',
VersionHandler,
r'/capacity/?$',
CapacityLogHandler,
r'/capacity/csv/?$',
CapacityLogCsvHandler,
r'/redhat/account/?$',
RemovedIn51RedHatAccountHandler,
r'/redhat/setup/?$',
RemovedIn51RedHatSetupHandler,
r'/settings/?$',
MasterNodeSettingsHandler,
r'/openstack-config/?$',
OpenstackConfigCollectionHandler,
r'/openstack-config/(?P<obj_id>\d+)/?$',
OpenstackConfigHandler,
r'/openstack-config/execute/?$',
OpenstackConfigExecuteHandler,
)
feature_groups_urls = {
'advanced': (
r'/clusters/(?P<cluster_id>\d+)/spawn_vms/?$',
SpawnVmsHandler,
r'/nodes/(?P<node_id>\d+)/vms_conf/?$',
NodeVMsHandler,
)
}
urls = [i if isinstance(i, str) else i.__name__ for i in urls]
_locals = locals()
def get_extensions_urls():
"""Get handlers and urls from extensions, convert them into web.py format
:returns: dict in the next format:
{'urls': (r'/url/', 'ClassName'),
'handlers': [{
'class': ClassName,
'name': 'ClassName'}]}
"""
urls = []
handlers = []
for extension in get_all_extensions():
for url in extension.urls:
# TODO(eli): handler name should be extension specific
# not to have problems when several extensions use
# the same name for handler classes.
# Should be done as a part of blueprint:
# https://blueprints.launchpad.net/fuel/+spec
# /volume-manager-refactoring
handler_name = url['handler'].__name__
handlers.append({
'class': url['handler'],
'name': handler_name})
urls.extend((url['uri'], handler_name))
return {'urls': urls, 'handlers': handlers}
def get_feature_groups_urls():
"""Method for retrieving urls dependant on feature groups
Feature groups can be 'experimental' or 'advanced' which should be
enable only for this modes.
:returns: list of urls
"""
urls = []
for feature in settings.VERSION['feature_groups']:
urls.extend([i if isinstance(i, str) else i.__name__ for i in
feature_groups_urls.get(feature, [])])
return urls
def get_all_urls():
"""Merges urls and handlers from core and from extensions"""
ext_urls = get_extensions_urls()
all_urls = list(urls)
all_urls.extend(get_feature_groups_urls())
all_urls.extend(ext_urls['urls'])
for handler in ext_urls['handlers']:
_locals[handler['name']] = handler['class']
return [all_urls, _locals]
def app():
return web.application(*get_all_urls())
def public_urls():
return {
r'/nodes/?$': ['POST'],
r'/nodes/agent/?$': ['PUT'],
r'/clusters/(?P<cluster_id>\d+)/plugin_links/?$': ['POST']
}
def cookie_urls():
return [
r'/api(/v[0-9]+)?/dump/[A-Za-z0-9-_.]+$',
r'/api(/v[0-9]+)?/capacity/csv/?$'
]
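
For context, a minimal sketch of the web.py contract that get_all_urls() satisfies: web.application() takes a flat sequence alternating url regex and handler name, plus a namespace used to resolve those names (globals() here, _locals above). The handler below is illustrative, not part of nailgun.

import web

class PingHandler(object):
    def GET(self):
        return 'pong'

urls = (r'/ping/?$', 'PingHandler')
app = web.application(urls, globals())

if __name__ == '__main__':
    app.run()  # serves on 0.0.0.0:8080 by default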

View File

@@ -1,167 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import six
import sqlalchemy as sa
from nailgun.api.v1.validators.base import BasicValidator
from nailgun.api.v1.validators.json_schema.assignment \
import assignment_format_schema
from nailgun.api.v1.validators.json_schema.assignment \
import unassignment_format_schema
from nailgun.db import db
from nailgun.db.sqlalchemy.models import Node
from nailgun import errors
from nailgun import objects
from nailgun.settings import settings
from nailgun.utils.restrictions import RestrictionBase
class AssignmentValidator(BasicValidator):
@staticmethod
def check_all_nodes(nodes, node_ids):
not_found_node_ids = set(node_ids) - set(n.id for n in nodes)
if not_found_node_ids:
raise errors.InvalidData(
u"Nodes with ids {0} were not found."
.format(
", ".join(map(str, not_found_node_ids))
), log_message=True
)
@classmethod
def check_unique_hostnames(cls, nodes, cluster_id):
hostnames = [node.hostname for node in nodes]
node_ids = [node.id for node in nodes]
conflicting_hostnames = [
x[0] for x in
db.query(
Node.hostname).filter(sa.and_(
~Node.id.in_(node_ids),
Node.hostname.in_(hostnames),
Node.cluster_id == cluster_id,
)
).all()
]
if conflicting_hostnames:
raise errors.AlreadyExists(
"Nodes with hostnames [{0}] already exist in cluster {1}."
.format(", ".join(conflicting_hostnames), cluster_id)
)
class NodeAssignmentValidator(AssignmentValidator):
@classmethod
def validate_collection_update(cls, data, cluster_id=None):
data = cls.validate_json(data)
cls.validate_schema(data, assignment_format_schema)
dict_data = dict((d["id"], d["roles"]) for d in data)
received_node_ids = dict_data.keys()
nodes = db.query(Node).filter(Node.id.in_(received_node_ids))
cls.check_all_nodes(nodes, received_node_ids)
cluster = objects.Cluster.get_by_uid(
cluster_id, fail_if_not_found=True
)
cls.check_unique_hostnames(nodes, cluster_id)
for node_id in received_node_ids:
cls.validate_roles(
cluster,
dict_data[node_id]
)
return dict_data
@classmethod
def validate_roles(cls, cluster, roles):
available_roles = objects.Cluster.get_roles(cluster)
roles = set(roles)
not_valid_roles = roles - set(available_roles)
if not_valid_roles:
raise errors.InvalidData(
u"{0} are not valid roles for node in environment {1}"
.format(u", ".join(not_valid_roles), cluster.id),
log_message=True
)
cls.check_roles_for_conflicts(roles, available_roles)
cls.check_roles_requirement(
roles,
available_roles,
{
'settings': objects.Cluster.get_editable_attributes(cluster),
'cluster': cluster,
'version': settings.VERSION,
})
@classmethod
def check_roles_for_conflicts(cls, roles, roles_metadata):
all_roles = set(roles_metadata.keys())
for role in roles:
if "conflicts" in roles_metadata[role]:
other_roles = roles - set([role])
conflicting_roles = roles_metadata[role]["conflicts"]
if conflicting_roles == "*":
conflicting_roles = all_roles - set([role])
else:
conflicting_roles = set(conflicting_roles)
conflicting_roles &= other_roles
if conflicting_roles:
raise errors.InvalidNodeRole(
"Role '{0}' in conflict with role '{1}'."
.format(role, ", ".join(conflicting_roles)),
log_message=True
)
@classmethod
def check_roles_requirement(cls, roles, roles_metadata, models):
for role in roles:
if "restrictions" in roles_metadata[role]:
result = RestrictionBase.check_restrictions(
models, roles_metadata[role]['restrictions']
)
if result['result']:
raise errors.InvalidNodeRole(
"Role '{}' restrictions mismatch: {}"
.format(role, result['message'])
)
class NodeUnassignmentValidator(AssignmentValidator):
@classmethod
def validate_collection_update(cls, data, cluster_id=None):
list_data = cls.validate_json(data)
cls.validate_schema(list_data, unassignment_format_schema)
node_ids_set = set(n['id'] for n in list_data)
nodes = db.query(Node).filter(Node.id.in_(node_ids_set))
node_id_cluster_map = dict(
(n.id, n.cluster_id) for n in
db.query(Node.id, Node.cluster_id).filter(
Node.id.in_(node_ids_set)))
other_cluster_ids_set = set(node_id_cluster_map.values()) - \
set((int(cluster_id),))
if other_cluster_ids_set:
raise errors.InvalidData(
u"Nodes [{0}] are not members of environment {1}."
.format(
u", ".join(
str(n_id) for n_id, c_id in
                        six.iteritems(node_id_cluster_map)
if c_id in other_cluster_ids_set
), cluster_id), log_message=True
)
cls.check_all_nodes(nodes, node_ids_set)
return nodes
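
A toy rerun of the rule implemented in check_roles_for_conflicts above: a role's metadata may list conflicting roles explicitly or use '*' to conflict with every other role. The role names and metadata are invented.

roles_metadata = {
    'controller': {'conflicts': ['compute']},
    'compute': {},
    'virt': {'conflicts': '*'},
}

def find_conflict(roles):
    all_roles = set(roles_metadata)
    for role in roles:
        conflicts = roles_metadata[role].get('conflicts')
        if conflicts == '*':
            conflicts = all_roles - {role}
        found = set(conflicts or ()) & (roles - {role})
        if found:
            return role, found
    return None

print(find_conflict({'controller', 'compute'}))  # ('controller', {'compute'})
print(find_conflict({'compute'}))                # None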

View File

@@ -1,237 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import copy
import jsonschema
from jsonschema import exceptions
from oslo_serialization import jsonutils
import six
from nailgun.api.v1.validators.json_schema import base_types
from nailgun import errors
from nailgun import objects
from nailgun.utils import restrictions
class BasicValidator(object):
single_schema = None
collection_schema = None
@classmethod
def validate_json(cls, data):
        # TODO(ikutukov): this method not only validates json but also
        # returns the parsed data
if data:
try:
res = jsonutils.loads(data)
except Exception:
raise errors.JsonDecodeError(
"Invalid json received",
log_message=True
)
else:
raise errors.InvalidData(
"Empty request received",
log_message=True
)
return res
@classmethod
def validate_request(cls, req, resource_type,
single_schema=None,
collection_schema=None):
json_req = cls.validate_json(req)
use_schema = {
"single": single_schema or cls.single_schema,
"collection": collection_schema or cls.collection_schema
}.get(resource_type)
try:
jsonschema.validate(json_req, use_schema)
except exceptions.ValidationError as exc:
if len(exc.path) > 0:
raise errors.JsonValidationError(
                    # NOTE(ikutukov): there used to be an exc.path.pop() here.
                    # It was buggy because a JSONSchema error path may contain
                    # integers, and joining integers with strings does not
                    # work in Python, so some schema error messages broke and
                    # produced a 500 error code instead of 400.
": ".join([six.text_type(exc.path), exc.message])
)
raise errors.JsonValidationError(exc.message)
@classmethod
def validate(cls, data):
return cls.validate_json(data)
@classmethod
def validate_schema(cls, data, schema):
"""Validate a given data with a given schema.
:param data: a data to validate represented as a dict
:param schema: a schema to validate represented as a dict;
must be in JSON Schema Draft 4 format.
"""
try:
checker = jsonschema.FormatChecker()
jsonschema.validate(data, schema, format_checker=checker)
except Exception as exc:
# We need to cast a given exception to the string since it's the
# only way to print readable validation error. Unfortunately,
# jsonschema has no base class for exceptions, so we don't know
# about internal attributes with error description.
raise errors.InvalidData(str(exc))
@classmethod
def validate_release(cls, data=None, cluster=None, graph_type=None):
"""Validate if deployment tasks are present in db
:param data: data
:param cluster: Cluster instance
:param graph_type: deployment graph type
:raises NoDeploymentTasks:
"""
# TODO(akostrikov) https://bugs.launchpad.net/fuel/+bug/1561485
if (cluster and objects.Release.is_granular_enabled(cluster.release)
and not objects.Cluster.get_deployment_tasks(
cluster, graph_type)):
raise errors.NoDeploymentTasks(
"There are no deployment tasks for graph type '{}'. "
"Checked cluster (ID={}), its plugins and release (ID={})."
"".format(graph_type, cluster.id, cluster.release.id))
@classmethod
def validate_ids_list(cls, data):
"""Validate list of integer identifiers.
:param data: ids list to be validated and converted
:type data: iterable of strings
:returns: converted and verified data
:rtype: list of integers
"""
try:
ret = [int(d) for d in data]
except ValueError:
raise errors.InvalidData('Comma-separated numbers list expected',
log_message=True)
cls.validate_schema(ret, base_types.IDS_ARRAY)
return ret
class BaseDefferedTaskValidator(BasicValidator):
@classmethod
def validate(cls, cluster):
pass
class BasicAttributesValidator(BasicValidator):
@classmethod
def validate(cls, data):
attrs = cls.validate_json(data)
cls.validate_attributes(attrs)
return attrs
@classmethod
def validate_attributes(cls, data, models=None, force=False):
"""Validate attributes.
:param data: attributes
:type data: dict
:param models: models which are used in
restrictions conditions
:type models: dict
:param force: don't check restrictions
:type force: bool
"""
for attrs in six.itervalues(data):
if not isinstance(attrs, dict):
continue
for attr_name, attr in six.iteritems(attrs):
cls.validate_attribute(attr_name, attr)
# If settings are present restrictions can be checked
if models and not force:
restrict_err = restrictions.AttributesRestriction.check_data(
models, data)
if restrict_err:
raise errors.InvalidData(
"Some restrictions didn't pass verification: {}"
.format(restrict_err))
return data
@classmethod
def validate_attribute(cls, attr_name, attr):
"""Validates a single attribute from settings.yaml.
Dict is of this form::
description: <description>
label: <label>
restrictions:
- <restriction>
- <restriction>
- ...
type: <type>
value: <value>
weight: <weight>
regex:
error: <error message>
source: <regexp source>
        We validate that 'value' corresponds to 'type' according to the
        ATTRIBUTE_TYPE_SCHEMAS mapping in json_schema/base_types.py.
If regex is present, we additionally check that the provided string
value matches the regexp.
:param attr_name: Name of the attribute being checked
:param attr: attribute value
:return: attribute or raise InvalidData exception
"""
if not isinstance(attr, dict):
return attr
if 'type' not in attr and 'value' not in attr:
return attr
schema = copy.deepcopy(base_types.ATTRIBUTE_SCHEMA)
type_ = attr.get('type')
if type_:
value_schema = base_types.ATTRIBUTE_TYPE_SCHEMAS.get(type_)
if value_schema:
schema['properties'].update(value_schema)
try:
cls.validate_schema(attr, schema)
except errors.JsonValidationError as e:
raise errors.JsonValidationError(
'[{0}] {1}'.format(attr_name, e.message))
# Validate regexp only if some value is present
# Otherwise regexp might be invalid
if attr['value']:
regex_err = restrictions.AttributesRestriction.validate_regex(attr)
if regex_err is not None:
raise errors.JsonValidationError(
'[{0}] {1}'.format(attr_name, regex_err))
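
The validate_schema/validate_attribute machinery above reduces to jsonschema with a FormatChecker and a per-type value schema merged into ATTRIBUTE_SCHEMA. A standalone miniature (the schema and attribute are illustrative):

import jsonschema

attribute_schema = {
    '$schema': 'http://json-schema.org/draft-04/schema#',
    'type': 'object',
    'properties': {
        'type': {'enum': ['text']},
        'value': {'type': ['string', 'null']},  # merged per-type schema
    },
    'required': ['type', 'value'],
}

attr = {'type': 'text', 'value': 'admin'}
# Raises jsonschema.exceptions.ValidationError on mismatch, which
# validate_schema wraps into errors.InvalidData.
jsonschema.validate(attr, attribute_schema,
                    format_checker=jsonschema.FormatChecker())
print('attribute accepted')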

View File

@@ -1,299 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from distutils import version
import six
import sqlalchemy as sa
from nailgun.api.v1.validators import base
from nailgun.api.v1.validators.json_schema import cluster as cluster_schema
from nailgun.api.v1.validators.node import ProvisionSelectedNodesValidator
from nailgun import consts
from nailgun.db import db
from nailgun.db.sqlalchemy.models import Node
from nailgun import errors
from nailgun import objects
from nailgun.plugins.manager import PluginManager
from nailgun.utils.restrictions import ComponentsRestrictions
class ClusterValidator(base.BasicValidator):
single_schema = cluster_schema.single_schema
collection_schema = cluster_schema.collection_schema
_blocked_for_update = (
'net_provider',
)
@classmethod
def _validate_common(cls, data, instance=None):
d = cls.validate_json(data)
release_id = d.get("release", d.get("release_id"))
if release_id:
release = objects.Release.get_by_uid(release_id)
if not release:
raise errors.InvalidData(
"Invalid release ID", log_message=True)
if not objects.Release.is_deployable(release):
raise errors.NotAllowed(
"Release with ID '{0}' is not deployable.".format(
release_id), log_message=True)
cls._validate_mode(d, release)
return d
@classmethod
def _validate_components(cls, release_id, components_list):
release = objects.Release.get_by_uid(release_id)
release_components = objects.Release.get_all_components(release)
ComponentsRestrictions.validate_components(
components_list,
release_components,
release.required_component_types)
@classmethod
def validate(cls, data):
d = cls._validate_common(data)
# TODO(ikalnitsky): move it to _validate_common when
# PATCH method will be implemented
release_id = d.get("release", d.get("release_id", None))
if not release_id:
raise errors.InvalidData(
u"Release ID is required", log_message=True)
if "name" in d:
if objects.ClusterCollection.filter_by(
None, name=d["name"]).first():
raise errors.AlreadyExists(
"Environment with this name already exists",
log_message=True
)
if "components" in d:
cls._validate_components(release_id, d['components'])
return d
@classmethod
def validate_update(cls, data, instance):
d = cls._validate_common(data, instance=instance)
if "name" in d:
query = objects.ClusterCollection.filter_by_not(
None, id=instance.id)
if objects.ClusterCollection.filter_by(
query, name=d["name"]).first():
raise errors.AlreadyExists(
"Environment with this name already exists",
log_message=True
)
for k in cls._blocked_for_update:
if k in d and getattr(instance, k) != d[k]:
raise errors.InvalidData(
u"Changing '{0}' for environment is prohibited".format(k),
log_message=True
)
cls._validate_mode(d, instance.release)
if 'nodes' in d:
# Here d['nodes'] is list of node IDs
# to be assigned to the cluster.
cls._validate_nodes(d['nodes'], instance)
return d
@classmethod
def _validate_mode(cls, data, release):
mode = data.get("mode")
if mode and mode not in release.modes:
modes_list = ', '.join(release.modes)
raise errors.InvalidData(
"Cannot deploy in {0} mode in current release."
" Need to be one of: {1}".format(
mode, modes_list),
log_message=True
)
@classmethod
def _validate_nodes(cls, new_node_ids, instance):
set_new_node_ids = set(new_node_ids)
set_old_node_ids = set(objects.Cluster.get_nodes_ids(instance))
nodes_to_add = set_new_node_ids - set_old_node_ids
nodes_to_remove = set_old_node_ids - set_new_node_ids
hostnames_to_add = [x[0] for x in db.query(Node.hostname)
.filter(Node.id.in_(nodes_to_add)).all()]
duplicated = [x[0] for x in db.query(Node.hostname).filter(
sa.and_(
Node.hostname.in_(hostnames_to_add),
Node.cluster_id == instance.id,
Node.id.notin_(nodes_to_remove)
)
).all()]
if duplicated:
raise errors.AlreadyExists(
"Nodes with hostnames [{0}] already exist in cluster {1}."
.format(",".join(duplicated), instance.id)
)
class ClusterAttributesValidator(base.BasicAttributesValidator):
@classmethod
def validate(cls, data, cluster=None, force=False):
d = cls.validate_json(data)
if "generated" in d:
raise errors.InvalidData(
"It is not allowed to update generated attributes",
log_message=True
)
if "editable" in d and not isinstance(d["editable"], dict):
raise errors.InvalidData(
"Editable attributes should be a dictionary",
log_message=True
)
attrs = d
models = None
if cluster is not None:
attrs = objects.Cluster.get_updated_editable_attributes(cluster, d)
cls.validate_provision(cluster, attrs)
cls.validate_allowed_attributes(cluster, d, force)
models = objects.Cluster.get_restrictions_models(
cluster, attrs=attrs.get('editable', {}))
cls.validate_attributes(attrs.get('editable', {}), models, force=force)
return d
@classmethod
def validate_provision(cls, cluster, attrs):
# NOTE(agordeev): disable classic provisioning for 7.0 or higher
if version.StrictVersion(cluster.release.environment_version) >= \
version.StrictVersion(consts.FUEL_IMAGE_BASED_ONLY):
provision_data = attrs['editable'].get('provision')
if provision_data:
if provision_data['method']['value'] != \
consts.PROVISION_METHODS.image:
raise errors.InvalidData(
u"Cannot use classic provisioning for adding "
u"nodes to environment",
log_message=True)
else:
raise errors.InvalidData(
u"Provisioning method is not set. Unable to continue",
log_message=True)
@classmethod
def validate_allowed_attributes(cls, cluster, data, force):
"""Validates if attributes are hot pluggable or not.
:param cluster: A cluster instance
:type cluster: nailgun.db.sqlalchemy.models.cluster.Cluster
:param data: Changed attributes of cluster
:type data: dict
:param force: Allow forcefully update cluster attributes
:type force: bool
:raises: errors.NotAllowed
"""
        # TODO: enable restrictions check for cluster attributes [1]
# [1] https://bugs.launchpad.net/fuel/+bug/1519904
# Validates only that plugin can be installed on deployed env.
# If cluster is locked we have to check which attributes
# we want to change and block an entire operation if there
# one with always_editable=False.
if not cluster.is_locked or force:
return
editable_cluster = objects.Cluster.get_editable_attributes(
cluster, all_plugins_versions=True)
editable_request = data.get('editable', {})
for attr_name, attr_request in six.iteritems(editable_request):
attr_cluster = editable_cluster.get(attr_name, {})
meta_cluster = attr_cluster.get('metadata', {})
meta_request = attr_request.get('metadata', {})
if PluginManager.is_plugin_data(attr_cluster):
if meta_request['enabled']:
changed_ids = [meta_request['chosen_id']]
if meta_cluster['enabled']:
changed_ids.append(meta_cluster['chosen_id'])
changed_ids = set(changed_ids)
elif meta_cluster['enabled']:
changed_ids = [meta_cluster['chosen_id']]
else:
continue
for plugin in meta_cluster['versions']:
plugin_id = plugin['metadata']['plugin_id']
always_editable = plugin['metadata']\
.get('always_editable', False)
if plugin_id in changed_ids and not always_editable:
raise errors.NotAllowed(
"Plugin '{0}' version '{1}' couldn't be changed "
"after or during deployment."
.format(attr_name,
plugin['metadata']['plugin_version']),
log_message=True
)
elif not meta_cluster.get('always_editable', False):
raise errors.NotAllowed(
"Environment attribute '{0}' couldn't be changed "
"after or during deployment.".format(attr_name),
log_message=True
)
class ClusterChangesValidator(base.BaseDefferedTaskValidator):
@classmethod
def validate(cls, cluster, graph_type=None):
cls.validate_release(cluster=cluster, graph_type=graph_type)
ProvisionSelectedNodesValidator.validate_provision(None, cluster)
class ClusterStopDeploymentValidator(base.BaseDefferedTaskValidator):
@classmethod
def validate(cls, cluster):
super(ClusterStopDeploymentValidator, cls).validate(cluster)
        # NOTE(aroma): the check must take into account the case when stop
        # deployment is called for a cluster that was created before the
        # master node upgrade to versions >= 8.0 and therefore has no
        # 'deployed_before' flag in its attributes.
        # NOTE(vsharshov): task based deployment (>= 9.0) implements a
        # safe way to stop the deployment action, so we can enable
        # stop deployment for such clusters without restrictions.
        # But it still needs to be disabled for old envs (< 9.0)
        # that were already deployed once [1].
        # [1]: https://bugs.launchpad.net/fuel/+bug/1529691
generated = cluster.attributes.generated
if generated.get('deployed_before', {}).get('value') and\
not objects.Release.is_lcm_supported(cluster.release):
raise errors.CannotBeStopped('Current deployment process is '
'running on a pre-deployed cluster '
'that does not support LCM.')
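
The gate in validate_provision compares environment versions with distutils' StrictVersion. Shown standalone below; the '7.0' threshold is an illustrative stand-in for consts.FUEL_IMAGE_BASED_ONLY, not a quote of it.

from distutils import version

IMAGE_BASED_ONLY = '7.0'  # assumed stand-in for consts.FUEL_IMAGE_BASED_ONLY

def image_based_required(environment_version):
    return (version.StrictVersion(environment_version) >=
            version.StrictVersion(IMAGE_BASED_ONLY))

print(image_based_required('6.1'))  # False: classic provisioning allowed
print(image_based_required('9.0'))  # True: image-based provisioning enforced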

View File

@@ -1,57 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.validators.base import BasicValidator
from nailgun.api.v1.validators.json_schema import plugin_link
from nailgun import errors
from nailgun import objects
class ClusterPluginLinkValidator(BasicValidator):
collection_schema = plugin_link.PLUGIN_LINKS_SCHEMA
@classmethod
def validate(cls, data, **kwargs):
parsed = super(ClusterPluginLinkValidator, cls).validate(data)
cls.validate_schema(parsed, plugin_link.PLUGIN_LINK_SCHEMA)
if objects.ClusterPluginLinkCollection.filter_by(
None,
url=parsed['url'],
cluster_id=kwargs['cluster_id']
).first():
raise errors.AlreadyExists(
"Cluster plugin link with URL {0} and cluster ID={1} already "
"exists".format(parsed['url'], kwargs['cluster_id']),
log_message=True)
return parsed
@classmethod
def validate_update(cls, data, instance):
parsed = super(ClusterPluginLinkValidator, cls).validate(data)
cls.validate_schema(parsed, plugin_link.PLUGIN_LINK_UPDATE_SCHEMA)
cluster_id = parsed.get('cluster_id', instance.cluster_id)
if objects.ClusterPluginLinkCollection.filter_by_not(
objects.ClusterPluginLinkCollection.filter_by(
None,
url=parsed.get('url', instance.url),
cluster_id=cluster_id,
),
id=instance.id
).first():
raise errors.AlreadyExists(
"Cluster plugin link with URL {0} and cluster ID={1} already "
"exists".format(parsed['url'], cluster_id),
log_message=True)
return parsed

View File

@@ -1,86 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.validators.base import BasicValidator
from nailgun.api.v1.validators.json_schema import deployment_graph as schema
from nailgun import errors
from nailgun import objects
class DeploymentGraphValidator(BasicValidator):
single_schema = schema.DEPLOYMENT_GRAPH_SCHEMA
collection_schema = schema.DEPLOYMENT_GRAPHS_SCHEMA
@classmethod
def validate(cls, data):
parsed = super(DeploymentGraphValidator, cls).validate(data)
cls.check_tasks_duplicates(parsed)
return parsed
@classmethod
def validate_update(cls, data, instance):
parsed = super(DeploymentGraphValidator, cls).validate(data)
cls.validate_schema(
parsed,
cls.single_schema
)
cls.check_tasks_duplicates(parsed)
return parsed
@classmethod
def check_tasks_duplicates(cls, parsed):
tasks = parsed.get('tasks', [])
ids = set()
dup = set()
for task in tasks:
if task['id'] in ids:
dup.add(task['id'])
else:
ids.add(task['id'])
if dup:
raise errors.InvalidData(
"Tasks duplication found: {0}".format(
', '.join(sorted(dup)))
)
class GraphExecuteParamsValidator(BasicValidator):
single_schema = schema.GRAPH_EXECUTE_PARAMS_SCHEMA
@classmethod
def validate(cls, data):
parsed = cls.validate_json(data)
cls.validate_schema(parsed, cls.single_schema)
nodes_to_check = set()
for graph in parsed['graphs']:
nodes_to_check.update(graph.get('nodes') or [])
if nodes_to_check:
cls.validate_nodes(nodes_to_check, parsed['cluster'])
return parsed
@classmethod
def validate_nodes(cls, ids, cluster_id):
nodes = objects.NodeCollection.filter_by(None, cluster_id=cluster_id)
nodes = objects.NodeCollection.filter_by_list(nodes, 'id', ids)
if nodes.count() != len(ids):
raise errors.InvalidData(
                'Nodes {} do not all belong to cluster {}'
                .format(', '.join(map(str, ids)), cluster_id)
)
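
check_tasks_duplicates above in miniature: the first occurrence of a task id wins, repeats are collected and then reported in sorted order. The task ids are made up.

tasks = [{'id': 'netconfig'}, {'id': 'globals'}, {'id': 'netconfig'}]
seen, dup = set(), set()
for task in tasks:
    (dup if task['id'] in seen else seen).add(task['id'])
print(sorted(dup))  # ['netconfig']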

View File

@@ -1,31 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.validators.base import BasicValidator
from nailgun import consts
from nailgun import errors
class DeploymentHistoryValidator(BasicValidator):
@classmethod
def validate_query(cls, nodes_ids, statuses, tasks_names):
if not statuses:
return
if not statuses.issubset(set(consts.HISTORY_TASK_STATUSES)):
raise errors.ValidationException(
"Statuses parameter could be only: {}".format(
", ".join(consts.HISTORY_TASK_STATUSES))
)

View File

@@ -1,65 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.validators.base import BasicValidator
from nailgun.api.v1.validators.json_schema import deployment_sequence as schema
from nailgun import errors
from nailgun import objects
class SequenceValidator(BasicValidator):
single_schema = schema.CREATE_SEQUENCE_SCHEMA
update_schema = schema.UPDATE_SEQUENCE_SCHEMA
@classmethod
def validate(cls, data):
parsed = cls.validate_json(data)
cls.validate_schema(
parsed,
cls.single_schema
)
release = objects.Release.get_by_uid(
parsed.pop('release'), fail_if_not_found=True
)
parsed['release_id'] = release.id
if objects.DeploymentSequence.get_by_name_for_release(
release, parsed['name']):
raise errors.AlreadyExists(
                'Sequence with name "{0}" already exists for release {1}.'
.format(parsed['name'], release.id)
)
return parsed
@classmethod
def validate_update(cls, data, instance):
parsed = cls.validate_json(data)
cls.validate_schema(parsed, cls.update_schema)
return parsed
@classmethod
def validate_delete(cls, *args, **kwargs):
pass
class SequenceExecutorValidator(BasicValidator):
single_schema = schema.SEQUENCE_EXECUTION_PARAMS
@classmethod
def validate(cls, data):
parsed = cls.validate_json(data)
cls.validate_schema(parsed, cls.single_schema)
return parsed

View File

@@ -1,48 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.validators.base import BasicValidator
from nailgun.api.v1.validators.json_schema import extension as extension_schema
from nailgun import errors
from nailgun.extensions import get_all_extensions
class ExtensionValidator(BasicValidator):
    single_schema = extension_schema.single_schema
collection_schema = extension_schema.collection_schema
@classmethod
def validate(cls, data):
data = set(super(ExtensionValidator, cls).validate(data))
all_extensions = set(ext.name for ext in get_all_extensions())
not_found_extensions = data - all_extensions
if not_found_extensions:
raise errors.CannotFindExtension(
"No such extensions: {0}".format(
", ".join(sorted(not_found_extensions))))
return list(data)
@classmethod
def validate_delete(cls, extension_names, cluster):
not_found_extensions = extension_names - set(cluster.extensions)
if not_found_extensions:
raise errors.CannotFindExtension(
"No such extensions to disable: {0}".format(
", ".join(sorted(not_found_extensions))))
return list(extension_names)

View File

@@ -1,18 +0,0 @@
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun.api.v1.validators.json_schema \
import cluster as cluster_schema
from nailgun.api.v1.validators.json_schema \
import node as node_schema

View File

@@ -1,42 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun import consts
#: JSON schema for ActionLog
schema = {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "ActionLog",
"description": "Serialized ActionLog object",
"type": "object",
"properties": {
"id": {"type": "number"},
"actor_id": {"type": ["string", "null"]},
"action_group": {"type": "string"},
"action_name": {"type": "string"},
"action_type": {
"type": "string",
"enum": list(consts.ACTION_TYPES)
},
"start_timestamp": {"type": "string"},
"end_timestamp": {"type": "string"},
"additional_info": {"type": "object"},
"is_sent": {"type": "boolean"},
"cluster_id": {"type": ["number", "null"]},
"task_uuid": {"type": ["string", "null"]}
}
}

View File

@@ -1,53 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2013 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
assignment_format_schema = {
'$schema': 'http://json-schema.org/draft-04/schema#',
'title': 'assignment',
'description': 'assignment map, node ids to arrays of roles',
'type': 'array',
'items': {
'type': 'object',
'properties': {
'id': {
                'description': 'The unique identifier of the node',
'type': 'integer'
},
'roles': {
'type': 'array',
'items': {'type': 'string'}
}
},
'required': ['id', 'roles'],
}
}
unassignment_format_schema = {
'$schema': 'http://json-schema.org/draft-04/schema#',
'title': 'unassignment',
'description': 'List with node ids for unassignment',
'type': 'array',
'items': {
'type': 'object',
'properties': {
'id': {
                'description': 'The unique identifier of the node',
'type': 'integer'
}
}
}
}
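
A request body that satisfies assignment_format_schema above; the node ids and role names are invented, while the schema dict is the one defined in this module.

import jsonschema

payload = [
    {'id': 1, 'roles': ['controller']},
    {'id': 2, 'roles': ['compute', 'cinder']},
]
jsonschema.validate(payload, assignment_format_schema)  # raises on mismatch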

View File

@@ -1,344 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# Common json schema types definition
from nailgun import consts
NULL = {
'type': 'null'
}
NULLABLE_STRING = {
'type': ['string', 'null']
}
NULLABLE_BOOL = {
'type': ['boolean', 'null']
}
POSITIVE_INTEGER = {
'type': 'integer',
'minimum': 1
}
POSITIVE_NUMBER = {
'type': 'number',
'minimum': 0,
'exclusiveMinimum': True
}
NULLABLE_POSITIVE_INTEGER = {
'anyOf': [POSITIVE_INTEGER, NULL]
}
def NULLABLE_ENUM(e):
    return {"enum": e + [None]}
NON_NEGATIVE_INTEGER = {
'type': 'integer',
'minimum': 0
}
NULLABLE_NON_NEGATIVE_INTEGER = {
'anyOf': [NON_NEGATIVE_INTEGER, NULL]
}
ID = POSITIVE_INTEGER
IDS_ARRAY = {
'type': 'array',
'items': ID,
}
STRINGS_ARRAY = {
'type': 'array',
'items': {'type': 'string'}
}
NULLABLE_ID = {
'anyOf': [ID, NULL]
}
IP_ADDRESS = {
'type': 'string',
'anyOf': [
{'format': 'ipv4'},
{'format': 'ipv6'},
]
}
IP_ADDRESS_RANGE = {
"type": "array",
"minItems": 2,
"maxItems": 2,
"items": IP_ADDRESS
}
IP_ADDRESS_LIST = {
"type": "array",
"minItems": 1,
"uniqueItems": True,
"items": IP_ADDRESS
}
NULLABLE_IP_ADDRESS = {
'anyOf': [IP_ADDRESS, NULL]
}
NULLABLE_IP_ADDRESS_RANGE = {
'anyOf': [IP_ADDRESS_RANGE, NULL]
}
NET_ADDRESS = {
'type': 'string',
    # check for a valid ip address with an optional route prefix,
    # e.g. 192.168.0.0/24
    'pattern': r'^(({octet}\.){{3}}{octet})({prefix})?$'.format(
octet='(2(5[0-5]|[0-4][0-9])|[01]?[0-9][0-9]?)',
prefix='/(3[012]|[12]?[0-9])'
),
}
NULLABLE_NET_ADDRESS = {
'anyOf': [NET_ADDRESS, NULL]
}
MAC_ADDRESS = {
'type': 'string',
'pattern': '^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$',
}
NULLABLE_MAC_ADDRESS = {
'anyOf': [MAC_ADDRESS, NULL]
}
FQDN = {
'type': 'string',
    'pattern': r'^({label}\.)*{label}$'.format(
label='[a-zA-Z0-9]([a-zA-Z0-9\\-]{0,61}[a-zA-Z0-9])?'
)
}
ERROR_RESPONSE = {
'$schema': 'http://json-schema.org/draft-04/schema#',
'title': 'Error response',
'type': 'object',
'properties': {
'error_code': {'type': 'string'},
'message': {'type': 'string'}
},
'required': ['error_code', 'message']
}
_FULL_RESTRICTION = {
"type": "object",
"required": ["condition"],
"properties": {
"condition": {"type": "string"},
"message": {"type": "string"},
"action": {"type": "string"}}}
# restriction can be specified as one item dict, with condition as a key
# and value as a message
_SHORT_RESTRICTION = {
"type": "object",
"minProperties": 1,
"maxProperties": 1}
RESTRICTIONS = {
"type": "array",
"minItems": 1,
"items": {"anyOf": [{"type": "string"}, _FULL_RESTRICTION,
_SHORT_RESTRICTION]}
}
UI_SETTINGS = {
"type": "object",
"required": [
"view_mode",
"filter",
"sort",
"filter_by_labels",
"sort_by_labels",
"search"
],
"additionalProperties": False,
"properties": {
"view_mode": {
"type": "string",
"description": "View mode of cluster nodes",
"enum": list(consts.NODE_VIEW_MODES),
},
"filter": {
"type": "object",
"description": ("Filters applied to node list and "
"based on node attributes"),
"properties": dict(
(key, {"type": "array"}) for key in consts.NODE_LIST_FILTERS
),
},
"sort": {
"type": "array",
"description": ("Sorters applied to node list and "
"based on node attributes"),
# TODO(@jkirnosova): describe fixed list of possible node sorters
"items": [
{"type": "object"},
],
},
"filter_by_labels": {
"type": "object",
"description": ("Filters applied to node list and "
"based on node custom labels"),
},
"sort_by_labels": {
"type": "array",
"description": ("Sorters applied to node list and "
"based on node custom labels"),
"items": [
{"type": "object"},
],
},
"search": {
"type": "string",
"description": "Search value applied to node list",
},
}
}
ATTRIBUTE_SCHEMA = {
'$schema': 'http://json-schema.org/draft-04/schema#',
'title': 'Schema for single editable attribute',
'type': 'object',
'properties': {
'type': {
'enum': [
'checkbox',
'custom_repo_configuration',
'hidden',
'password',
'radio',
'select',
'text',
'textarea',
'file',
'text_list',
'textarea_list',
'custom_hugepages',
'number',
]
},
# 'value': None, # custom validation depending on type
'restrictions': RESTRICTIONS,
'weight': {
'type': 'integer',
'minimum': 0,
},
},
'required': ['type', 'value'],
}
# Schema with allowed values for 'radio' and 'select' attribute types
ALLOWED_VALUES_SCHEMA = {
'value': {
'type': 'string',
},
'values': {
'type': 'array',
'minItems': 1,
'items': [
{
'type': 'object',
'properties': {
'data': {'type': 'string'},
'label': {'type': 'string'},
'description': {'type': 'string'},
'restrictions': RESTRICTIONS,
},
'required': ['data', 'label'],
},
],
},
}
# Schema with a structure of multiple text fields setting value
MULTIPLE_TEXT_FIELDS_SCHEMA = {
'value': {
'type': 'array',
'minItems': 0,
'items': {'type': 'string'},
},
'min': {
'type': 'integer',
'minimum': 0,
},
'max': {
'type': 'integer',
'minimum': 0,
}
}
# Additional property definitions for ATTRIBUTE_SCHEMA,
# depending on the 'type' property
ATTRIBUTE_TYPE_SCHEMAS = {
'checkbox': {'value': {'type': 'boolean'}},
'custom_repo_configuration': {
'value': {
'type': 'array',
'minItems': 1,
'items': [
{
'type': 'object',
'properties': {
'name': {'type': 'string'},
'priority': {'type': ['integer', 'null']},
'section': {'type': 'string'},
'suite': {'type': 'string'},
'type': {'type': 'string'},
'uri': {'type': 'string'},
}
}
],
},
},
'password': {'value': {'type': 'string'}},
'radio': ALLOWED_VALUES_SCHEMA,
'select': ALLOWED_VALUES_SCHEMA,
'text': {
'value': NULLABLE_STRING,
'nullable': {'type': 'boolean'}
},
'textarea': {'value': {'type': 'string'}},
'text_list': MULTIPLE_TEXT_FIELDS_SCHEMA,
'textarea_list': MULTIPLE_TEXT_FIELDS_SCHEMA,
'custom_hugepages': {
'value': {
'type': 'object',
'properties': {
size: NON_NEGATIVE_INTEGER
for size, _ in consts.HUGE_PAGES_SIZE_MAP
}
}
},
'number': {
'value': NULLABLE_NON_NEGATIVE_INTEGER,
'nullable': {'type': 'boolean'}
},
}
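
A quick self-check of the NET_ADDRESS and MAC_ADDRESS patterns defined above, rebuilt from the same pieces; the test values are illustrative.

import re

octet = '(2(5[0-5]|[0-4][0-9])|[01]?[0-9][0-9]?)'
prefix = '/(3[012]|[12]?[0-9])'
net_address = r'^(({octet}\.){{3}}{octet})({prefix})?$'.format(
    octet=octet, prefix=prefix)
mac_address = '^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})$'

print(bool(re.match(net_address, '192.168.0.0/24')))     # True
print(bool(re.match(net_address, '192.168.0.300')))      # False
print(bool(re.match(mac_address, '52:54:00:12:34:56')))  # True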

View File

@@ -1,66 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nailgun import consts
from nailgun.api.v1.validators.json_schema import base_types
COMPONENTS_TYPES_STR = '|'.join(
['hypervisor', 'network', 'storage', 'additional_service'])
COMPONENT_NAME_PATTERN = \
'^({0}):([0-9a-z_-]+:)*[0-9a-z_-]+$'.format(COMPONENTS_TYPES_STR)
# TODO(@ikalnitsky): add `required` properties to all needed objects
single_schema = {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Cluster",
"description": "Serialized Cluster object",
"type": "object",
"properties": {
"id": {"type": "number"},
"name": {"type": "string"},
"mode": {
"type": "string",
"enum": list(consts.CLUSTER_MODES)
},
"status": {
"type": "string",
"enum": list(consts.CLUSTER_STATUSES)
},
"ui_settings": base_types.UI_SETTINGS,
"release": {"type": "integer"},
"release_id": {"type": "integer"},
"replaced_deployment_info": {"type": "object"},
"replaced_provisioning_info": {"type": "object"},
"is_customized": {"type": "boolean"},
"fuel_version": {"type": "string"},
"components": {
'type': 'array',
'items': [{
'type': 'string',
'pattern': COMPONENT_NAME_PATTERN}]
}
}
}
collection_schema = {
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Cluster collection",
"description": "Serialized Cluster collection",
"type": "object",
"items": single_schema["properties"]
}
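
COMPONENT_NAME_PATTERN in action, rebuilt from the same pieces as above; the component names below are invented examples.

import re

types_str = '|'.join(
    ['hypervisor', 'network', 'storage', 'additional_service'])
pattern = '^({0}):([0-9a-z_-]+:)*[0-9a-z_-]+$'.format(types_str)

print(bool(re.match(pattern, 'hypervisor:libvirt:kvm')))  # True
print(bool(re.match(pattern, 'dashboard:horizon')))       # False: unknown type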

Some files were not shown because too many files have changed in this diff.