Retire repository

All Fuel repositories in the openstack namespace have already been retired;
retire the remaining Fuel repositories in the x namespace, since they are
unused now.

This change removes all content from the repository and adds the usual
README file to point out that the repository is retired following the
process from
https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project

See also
http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011675.html

A related change is https://review.opendev.org/699752.

Change-Id: I016c6c7263df9aefc9758d7f58168ec595894726
Andreas Jaeger 2019-12-18 19:42:32 +01:00
parent 3be87477ca
commit 47efe1f1e3
98 changed files with 10 additions and 9858 deletions
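The retirement process referenced in the commit message boils down to a short sequence of git operations. A minimal sketch, assuming the repository URL (the authoritative steps are in the linked infra manual):

    git clone https://opendev.org/x/fuel-plugin-nsx-t
    cd fuel-plugin-nsx-t
    git rm -r '*'                        # remove all repository content
    # add the standard retirement README.rst, then:
    git add README.rst
    git commit -m "Retire repository"    # git-review's hook adds the Change-Id
    git review                           # submit the change to Gerrit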

4
.gitignore vendored

@@ -1,4 +0,0 @@
.tox
.build
_build
*.pyc

4
.gitmodules vendored

@@ -1,4 +0,0 @@
[submodule "plugin_test/fuel-qa"]
path = plugin_test/fuel-qa
url = https://git.openstack.org/openstack/fuel-qa
branch = stable/mitaka

202
LICENSE

@@ -1,202 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

43
Puppetfile

@@ -1,43 +0,0 @@
#!/usr/bin/env ruby
#^syntax detection
# See https://github.com/bodepd/librarian-puppet-simple for additional docs
#
# Important information for fuel-library:
# With librarian-puppet-simple you *must* remove the existing folder from the
# repo prior to trying to run librarian-puppet as it will not remove the folder
# for you and you may run into some errors.
# Pull in puppetlabs-stdlib
mod 'stdlib',
:git => 'https://github.com/fuel-infra/puppetlabs-stdlib.git',
:ref => '4.9.0'
# Pull in puppetlabs-inifile
mod 'inifile',
:git => 'https://github.com/fuel-infra/puppetlabs-inifile.git',
:ref => '1.4.2'
# Pull in puppet-neutron
mod 'neutron',
:git => 'https://github.com/fuel-infra/puppet-neutron.git',
:ref => 'stable/mitaka'
# Pull in puppet-openstacklib
mod 'openstacklib',
:git => 'https://github.com/fuel-infra/puppet-openstacklib.git',
:ref => 'stable/mitaka'
# Pull in puppetlabs-firewall
mod 'firewall',
:git => 'https://github.com/fuel-infra/puppetlabs-firewall.git',
:ref => '1.8.0'
# Pull in puppet-keystone
mod 'keystone',
:git => 'https://github.com/fuel-infra/puppet-keystone.git',
:ref => 'stable/mitaka'
# Pull in puppet-nova
mod 'nova',
:git => 'https://github.com/fuel-infra/puppet-nova.git',
:ref => 'stable/mitaka'


@@ -1,5 +0,0 @@
Fuel NSX-T plugin
=================
The plugin allows Fuel deployment engineers to install OpenStack with
VMware NSX Transformers as the network backend for Neutron.

10
README.rst Normal file

@@ -0,0 +1,10 @@
This project is no longer maintained.
The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".
For any further questions, please email
openstack-discuss@lists.openstack.org or join #openstack-dev on
Freenode.

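For example, the pre-retirement tree can be restored locally with (repository URL assumed):

    git clone https://opendev.org/x/fuel-plugin-nsx-t
    cd fuel-plugin-nsx-t
    git checkout HEAD^1    # the commit before this retirement change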

@@ -1,15 +0,0 @@
- name: network:neutron:core:nsx
label: "Neutron with NSX-T plugin"
description: "NSX Transformers uses STT tunneling protocol. NSX must be up and running before OpenStack deployment!"
bind: !!pairs
- "cluster:net_segment_type": "tun"
compatible:
- name: "hypervisor:vmware"
- name: "hypervisor:qemu"
- name: "storage:block:lvm"
- name: "storage:image:ceph"
- name: "storage:object:ceph"
requires: []
incompatible:
- name: "additional_service:ironic"

18
compute-nova-config.pp

@@ -1,18 +0,0 @@
notice('fuel-plugin-nsx-t: compute_nova_config.pp')
include ::nova::params
nova_config {
'neutron/service_metadata_proxy': value => 'True';
'neutron/ovs_bridge': value => 'nsx-managed';
}
service { 'nova-compute':
ensure => running,
name => $::nova::params::compute_service_name,
enable => true,
hasstatus => true,
hasrestart => true,
}
Nova_config<| |> ~> Service['nova-compute']

48
compute-vmware-nova-config.pp

@@ -1,48 +0,0 @@
notice('fuel-plugin-nsx-t: compute_vmware_nova_config.pp')
include ::nova::params
$neutron_config = hiera_hash('neutron_config')
$neutron_metadata_proxy_secret = $neutron_config['metadata']['metadata_proxy_shared_secret']
$management_vip = hiera('management_vip')
$service_endpoint = hiera('service_endpoint', $management_vip)
$ssl_hash = hiera_hash('use_ssl', {})
$neutron_username = pick($neutron_config['keystone']['admin_user'], 'neutron')
$neutron_password = $neutron_config['keystone']['admin_password']
$neutron_tenant_name = pick($neutron_config['keystone']['admin_tenant'], 'services')
$region = hiera('region', 'RegionOne')
$admin_identity_protocol = get_ssl_property($ssl_hash, {}, 'keystone', 'admin', 'protocol', 'http')
$admin_identity_address = get_ssl_property($ssl_hash, {}, 'keystone', 'admin', 'hostname', [$service_endpoint, $management_vip])
$neutron_internal_protocol = get_ssl_property($ssl_hash, {}, 'neutron', 'internal', 'protocol', 'http')
$neutron_endpoint = get_ssl_property($ssl_hash, {}, 'neutron', 'internal', 'hostname', [hiera('neutron_endpoint', ''), $management_vip])
$auth_api_version = 'v3'
$admin_identity_uri = "${admin_identity_protocol}://${admin_identity_address}:35357"
$neutron_auth_url = "${admin_identity_uri}/${auth_api_version}"
$neutron_url = "${neutron_internal_protocol}://${neutron_endpoint}:9696"
class {'nova::network::neutron':
neutron_password => $neutron_password,
neutron_project_name => $neutron_tenant_name,
neutron_region_name => $region,
neutron_username => $neutron_username,
neutron_auth_url => $neutron_auth_url,
neutron_url => $neutron_url,
neutron_ovs_bridge => '',
}
nova_config {
'neutron/service_metadata_proxy': value => 'True';
'neutron/metadata_proxy_shared_secret': value => $neutron_metadata_proxy_secret;
}
service { 'nova-compute':
ensure => running,
name => $::nova::params::compute_service_name,
enable => true,
hasstatus => true,
hasrestart => true,
}
Class['nova::network::neutron'] ~> Service['nova-compute']
Nova_config<| |> ~> Service['nova-compute']

20
configure-agents-dhcp.pp

@@ -1,20 +0,0 @@
notice('fuel-plugin-nsx-t: configure-agents-dhcp.pp')
neutron_dhcp_agent_config {
'DEFAULT/ovs_integration_bridge': value => 'nsx-managed';
'DEFAULT/interface_driver': value => 'neutron.agent.linux.interface.OVSInterfaceDriver';
'DEFAULT/enable_metadata_network': value => true;
'DEFAULT/enable_isolated_metadata': value => true;
'DEFAULT/ovs_use_veth': value => true;
}
if 'primary-controller' in hiera('roles') {
exec { 'dhcp-agent-restart':
command => "crm resource restart $(crm status|awk '/dhcp/ {print \$3}')",
path => '/usr/bin:/usr/sbin:/bin:/sbin',
logoutput => true,
provider => 'shell',
tries => 3,
try_sleep => 10,
}
}

62
configure-plugin.pp

@@ -1,62 +0,0 @@
notice('fuel-plugin-nsx-t: configure-plugin.pp')
include ::nsxt::params
file { $::nsxt::params::nsx_plugin_dir:
ensure => directory,
}
file { $::nsxt::params::nsx_plugin_config:
ensure => present,
content => template('nsxt/nsx.ini')
}
$settings = hiera($::nsxt::params::hiera_key)
$managers = $settings['nsx_api_managers']
$user = $settings['nsx_api_user']
$password = $settings['nsx_api_password']
$overlay_tz = $settings['default_overlay_tz_uuid']
$vlan_tz = $settings['default_vlan_tz_uuid']
$tier0_router = $settings['default_tier0_router_uuid']
$edge_cluster = $settings['default_edge_cluster_uuid']
nsx_config {
'nsx_v3/nsx_api_managers': value => $managers;
'nsx_v3/nsx_api_user': value => $user;
'nsx_v3/nsx_api_password': value => $password;
'nsx_v3/default_overlay_tz_uuid': value => $overlay_tz;
'nsx_v3/default_vlan_tz_uuid': value => $vlan_tz;
'nsx_v3/default_tier0_router_uuid': value => $tier0_router;
'nsx_v3/default_edge_cluster_uuid': value => $edge_cluster;
}
file { '/etc/neutron/plugin.ini':
ensure => link,
target => $::nsxt::params::nsx_plugin_config,
replace => true,
require => File[$::nsxt::params::nsx_plugin_dir]
}
if !$settings['insecure'] {
nsx_config { 'nsx_v3/insecure': value => $settings['insecure']; }
$ca_filename = try_get_value($settings['ca_file'],'name','')
if !empty($ca_filename) {
$ca_certificate_content = $settings['ca_file']['content']
$ca_file = "${::nsxt::params::nsx_plugin_dir}/${ca_filename}"
nsx_config { 'nsx_v3/ca_file': value => $ca_file; }
file { $ca_file:
ensure => present,
content => $ca_certificate_content,
require => File[$::nsxt::params::nsx_plugin_dir],
}
}
}
File[$::nsxt::params::nsx_plugin_dir]->
File[$::nsxt::params::nsx_plugin_config]->
Nsx_config<||>

14
create-repo.pp

@@ -1,14 +0,0 @@
notice('fuel-plugin-nsx-t: create-repo.pp')
include ::nsxt::params
$settings = hiera($::nsxt::params::hiera_key)
$managers = $settings['nsx_api_managers']
$username = $settings['nsx_api_user']
$password = $settings['nsx_api_password']
class { '::nsxt::create_repo':
managers => $managers,
username => $username,
password => $password,
}

6
gem-install.pp

@@ -1,6 +0,0 @@
notice('fuel-plugin-nsx-t: gem-install.pp')
# the Ruby gem packages must be preinstalled before the Puppet module is used
package { ['ruby-json', 'ruby-rest-client']:
ensure => latest,
}

7
hiera-override.pp

@@ -1,7 +0,0 @@
notice('fuel-plugin-nsx-t: hiera-override.pp')
include ::nsxt::params
class { '::nsxt::hiera_override':
override_file_name => $::nsxt::params::hiera_key,
}

33
install-nsx-packages.pp

@@ -1,33 +0,0 @@
notice('fuel-plugin-nsx-t: install-nsx-packages.pp')
$nsx_required_packages = ['libunwind8', 'zip', 'libgflags2', 'libgoogle-perftools4', 'traceroute',
'python-mako', 'python-simplejson', 'python-support', 'python-unittest2',
'python-yaml', 'python-netaddr', 'libprotobuf8',
'libboost-filesystem1.54.0', 'dkms', 'libboost-chrono-dev',
'libboost-iostreams1.54.0', 'libvirt0']
$nsx_packages = ['libgoogle-glog0', 'libjson-spirit', 'nicira-ovs-hypervisor-node', 'nsxa',
'nsx-agent', 'nsx-aggservice', 'nsx-cli', 'nsx-da', 'nsx-host',
'nsx-host-node-status-reporter', 'nsx-lldp', 'nsx-logical-exporter', 'nsx-mpa',
'nsx-netcpa', 'nsx-sfhc', 'nsx-transport-node-status-reporter',
'openvswitch-common', 'openvswitch-datapath-dkms', 'openvswitch-pki',
'openvswitch-switch', 'python-openvswitch', 'tcpdump-ovs']
package { $nsx_required_packages:
ensure => latest,
}
package { $nsx_packages:
ensure => latest,
require => [Package[$nsx_required_packages],Service['openvswitch-switch']]
}
service { 'openvswitch-switch':
ensure => stopped,
enable => false,
}
# This is a bash script, not a POSIX shell (Ubuntu dash) script.
# If it is left in place, commands invoked via '/bin/sh -c' fail to execute,
# for example starting Galera via Pacemaker.
file { '/etc/profile.d/nsx-alias.sh':
ensure => absent,
require => Package[$nsx_packages],
}

8
install-nsx-plugin.pp

@@ -1,8 +0,0 @@
notice('fuel-plugin-nsx-t: install-nsx-plugin.pp')
include ::nsxt::params
package { $::nsxt::params::plugin_package:
ensure => present,
}

66
neutron-network-create.pp

@@ -1,66 +0,0 @@
notice('fuel-plugin-nsx-t: neutron-network-create.pp')
include ::nsxt::params
$access_hash = hiera_hash('access',{})
$neutron_config = hiera_hash('neutron_config')
$floating_net = try_get_value($neutron_config, 'default_floating_net', 'net04_ext')
$internal_net = try_get_value($neutron_config, 'default_private_net', 'net04')
$os_tenant_name = try_get_value($access_hash, 'tenant', 'admin')
$settings = hiera($::nsxt::params::hiera_key)
$floating_ip_range = split($settings['floating_ip_range'], '-')
$floating_ip_range_start = $floating_ip_range[0]
$floating_ip_range_end = $floating_ip_range[1]
$floating_net_allocation_pool = "start=${floating_ip_range_start},end=${floating_ip_range_end}"
$floating_net_cidr = $settings['floating_net_cidr']
$floating_net_gw = $settings['floating_net_gw']
$default_floating_net_gw = regsubst($floating_net_cidr,'^(\d+\.\d+\.\d+)\.\d+/\d+$','\1.1')
$skip_provider_network = hiera('skip_provider_network', false)
if ! $skip_provider_network {
neutron_network { $floating_net :
ensure => 'present',
provider_physical_network => $settings['external_network'],
provider_network_type => 'local',
router_external => true,
tenant_name => $os_tenant_name,
shared => true,
}
neutron_subnet { "${floating_net}__subnet" :
ensure => 'present',
cidr => $floating_net_cidr,
network_name => $floating_net,
tenant_name => $os_tenant_name,
gateway_ip => pick($floating_net_gw, $default_floating_net_gw),
enable_dhcp => false,
allocation_pools => $floating_net_allocation_pool,
require => Neutron_network[$floating_net],
}
skip_provider_network($::nsxt::params::hiera_yml)
}
$internal_net_dns = split($settings['internal_net_dns'], ',')
$internal_net_cidr = $settings['internal_net_cidr']
neutron_network { $internal_net :
ensure => 'present',
provider_physical_network => false,
router_external => false,
tenant_name => $os_tenant_name,
shared => true,
}
neutron_subnet { "${internal_net}__subnet" :
ensure => 'present',
cidr => $internal_net_cidr,
network_name => $internal_net,
tenant_name => $os_tenant_name,
gateway_ip => regsubst($internal_net_cidr,'^(\d+\.\d+\.\d+)\.\d+/\d+$','\1.1'),
enable_dhcp => true,
dns_nameservers => pick($internal_net_dns,[]),
require => Neutron_network[$internal_net],
}

69
neutron-server-start.pp

@@ -1,69 +0,0 @@
notice('fuel-plugin-nsx-t: neutron-server-start.pp')
include ::neutron::params
service { 'neutron-server-start':
ensure => 'running',
name => $::neutron::params::server_service,
enable => true,
hasstatus => true,
hasrestart => true,
}
include ::nsxt::params
neutron_config {
'DEFAULT/core_plugin': value => $::nsxt::params::core_plugin;
'DEFAULT/service_plugins': ensure => absent;
'service_providers/service_provider': ensure => absent;
}
Neutron_config<||> ~> Service['neutron-server-start']
if 'primary-controller' in hiera('roles') {
include ::neutron::db::sync
Exec['neutron-db-sync'] ~> Service['neutron-server-start']
Neutron_config<||> ~> Exec['neutron-db-sync']
$neutron_config = hiera_hash('neutron_config')
$management_vip = hiera('management_vip')
$service_endpoint = hiera('service_endpoint', $management_vip)
$ssl_hash = hiera_hash('use_ssl', {})
$internal_auth_protocol = get_ssl_property($ssl_hash, {}, 'keystone', 'internal', 'protocol', 'http')
$internal_auth_address = get_ssl_property($ssl_hash, {}, 'keystone', 'internal', 'hostname', [$service_endpoint])
$identity_uri = "${internal_auth_protocol}://${internal_auth_address}:5000"
$auth_api_version = 'v2.0'
$auth_url = "${identity_uri}/${auth_api_version}"
$auth_password = $neutron_config['keystone']['admin_password']
$auth_user = pick($neutron_config['keystone']['admin_user'], 'neutron')
$auth_tenant = pick($neutron_config['keystone']['admin_tenant'], 'services')
$auth_region = hiera('region', 'RegionOne')
$auth_endpoint_type = 'internalURL'
exec { 'waiting-for-neutron-api':
environment => [
"OS_TENANT_NAME=${auth_tenant}",
"OS_USERNAME=${auth_user}",
"OS_PASSWORD=${auth_password}",
"OS_AUTH_URL=${auth_url}",
"OS_REGION_NAME=${auth_region}",
"OS_ENDPOINT_TYPE=${auth_endpoint_type}",
],
path => '/usr/sbin:/usr/bin:/sbin:/bin',
tries => '30',
try_sleep => '15',
command => 'neutron net-list --http-timeout=4 2>&1 > /dev/null',
provider => 'shell',
subscribe => Service['neutron-server-start'],
refreshonly => true,
}
}
# fix: point NEUTRON_PLUGIN_CONFIG at plugin.ini for the neutron-server init scripts
exec { 'fix-plugin-ini':
path => '/usr/sbin:/usr/bin:/sbin:/bin',
command => 'sed -ri \'s|NEUTRON_PLUGIN_CONFIG=""|NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugin.ini"|\' /usr/share/neutron-common/plugin_guess_func',
provider => 'shell',
before => Service['neutron-server-start'],
}

8
neutron-server-stop.pp

@@ -1,8 +0,0 @@
notice('fuel-plugin-nsx-t: neutron-server-stop.pp')
include ::neutron::params
service { 'neutron-server-stop':
ensure => 'stopped',
name => $::neutron::params::server_service,
}

68
reg-node-as-transport-node.pp

@@ -1,68 +0,0 @@
notice('fuel-plugin-nsx-t: reg-node-as-transport-node.pp')
include ::nsxt::params
$settings = hiera($::nsxt::params::hiera_key)
$managers = $settings['nsx_api_managers']
$user = $settings['nsx_api_user']
$password = $settings['nsx_api_password']
$uplink_profile_uuid = $settings['uplink_profile_uuid']
$transport_zone_uuid = $settings['default_overlay_tz_uuid']
if 'primary-controller' in hiera('roles') or 'controller' in hiera('roles') {
$pnics = $settings['controller_pnics_pairs']
$static_ip_pool_uuid = $settings['controller_ip_pool_uuid']
} else {
$pnics = $settings['compute_pnics_pairs']
$static_ip_pool_uuid = $settings['compute_ip_pool_uuid']
}
$vtep_interfaces = get_interfaces($pnics)
up_interface { $vtep_interfaces:
before => Nsxt_create_transport_node['Add transport node'],
}
firewall {'0000 Accept STT traffic':
proto => 'tcp',
dport => ['7471'],
action => 'accept',
before => Nsxt_create_transport_node['Add transport node'],
}
if !$settings['insecure'] {
$ca_filename = try_get_value($settings['ca_file'],'name','')
if empty($ca_filename) {
# default path to the CA bundle on Ubuntu 14.04
$ca_file = '/etc/ssl/certs/ca-certificates.crt'
} else {
$ca_file = "${::nsxt::params::nsx_plugin_dir}/${ca_filename}"
}
Nsxt_create_transport_node { ca_file => $ca_file }
}
nsxt_create_transport_node { 'Add transport node':
ensure => present,
managers => $managers,
username => $user,
password => $password,
uplink_profile_id => $uplink_profile_uuid,
pnics => $pnics,
static_ip_pool_id => $static_ip_pool_uuid,
transport_zone_id => $transport_zone_uuid,
}
# workaround: wrap in a define, otherwise $title does not work and always has the value 'main'
define up_interface {
file { $title:
ensure => file,
path => "/etc/network/interfaces.d/ifcfg-${title}",
mode => '0644',
content => "auto ${title}\niface ${title} inet manual",
replace => true,
} ->
exec { $title:
path => '/usr/sbin:/usr/bin:/sbin:/bin',
command => "ifup ${title}",
provider => 'shell',
}
}

32
reg-node-on-management-plane.pp

@@ -1,32 +0,0 @@
notice('fuel-plugin-nsx-t: reg-node-on-management-plane.pp')
include ::nsxt::params
$settings = hiera($::nsxt::params::hiera_key)
$managers = $settings['nsx_api_managers']
$user = $settings['nsx_api_user']
$password = $settings['nsx_api_password']
nsxt_add_to_fabric { 'Register controller node on management plane':
ensure => present,
managers => $managers,
username => $user,
password => $password,
}
if !$settings['insecure'] {
$ca_filename = try_get_value($settings['ca_file'],'name','')
if empty($ca_filename) {
# default path to the CA bundle on Ubuntu 14.04
$ca_file = '/etc/ssl/certs/ca-certificates.crt'
} else {
$ca_file = "${::nsxt::params::nsx_plugin_dir}/${ca_filename}"
}
Nsxt_add_to_fabric { ca_file => $ca_file }
}
service { 'openvswitch-switch':
ensure => 'running'
}
Nsxt_add_to_fabric<||> -> Service['openvswitch-switch']

13
create_repo.sh

@@ -1,13 +0,0 @@
#!/bin/bash -e
repo_dir=$1
component_archive=$2
mkdir -p "$repo_dir"
cd "$repo_dir"
tar --wildcards --strip-components=1 -zxvf "$component_archive" "*/"
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
echo 'Label: nsx-t-protected-packages' > Release
chmod 755 .
chmod 644 ./*
apt-get update
rm -fr "${component_archive:?}"

3
pinning

@@ -1,3 +0,0 @@
Package: *
Pin: release l=nsx-t-protected-packages
Pin-Priority: 9000

14
get_interfaces.rb

@@ -1,14 +0,0 @@
module Puppet::Parser::Functions
newfunction(:get_interfaces, :type => :rvalue, :doc => <<-EOS
Returns the array of interface names for nsx-t VTEPs.
EOS
) do |args|
pnics = args[0]
vtep_interfaces = []
pnics.each_line do |pnic_pair|
device,uplink = pnic_pair.split(':')
vtep_interfaces.push(device.strip)
end
return vtep_interfaces
end
end

163
get_nsxt_components.rb

@@ -1,163 +0,0 @@
require 'rest-client'
require 'json'
require 'openssl'
require 'open-uri'
module Puppet::Parser::Functions
newfunction(:get_nsxt_components, :type => :rvalue, :doc => <<-EOS
Enables the install-upgrade service on one of the given NSX-T managers and
returns the local path of the downloaded NSX-T components archive.
example:
get_nsxt_components('172.16.0.1,172.16.0.2,172.16.0.3', username, password)
EOS
) do |args|
managers = args[0]
username = args[1]
password = args[2]
managers.split(',').each do |manager|
# strip the URL scheme; NSX-T 1.0 supports only the https scheme
manager.to_s.strip =~ /(https?:\/\/)?(?<manager>.+)/
manager = Regexp.last_match[:manager]
service_enabled = check_service_enabled(manager, username, password)
if service_enabled == 'error'
next
elsif service_enabled == 'disabled'
service_enabled_on_manager = enable_upgrade_service(manager, username, password)
else
service_enabled_on_manager = service_enabled
end
if check_service_running(service_enabled_on_manager, username, password)
return get_component(service_enabled_on_manager, username, password)
else
service_enabled_on_manager = enable_upgrade_service(service_enabled_on_manager, username, password)
if check_service_running(service_enabled_on_manager, username, password)
return get_component(service_enabled_on_manager, username, password)
end
end
raise Puppet::Error,("\nCan not enable install-upgrade service on nsx-t manager\n")
end
end
end
def disable_upgrade_service(manager, username, password)
debug("Try disable install-upgrade service on #{manager}")
request = {'service_name' => 'install-upgrade', 'service_properties' => {'enabled' => false }}
api_url = "https://#{manager}/api/v1/node/services/install-upgrade"
response = nsxt_api(api_url, username, password, 'put', request.to_json)
debug("response:\n #{response}")
if response['service_properties']['enabled'] == false
return
end
raise Puppet::Error,("\nCannot disable install-upgrade service on nsx-t manager #{manager}\n")
end
def get_component(manager, username, password)
file_path = '/tmp/nsxt-components.tgz'
component_url = get_component_url(manager, username, password)
begin
File.open(file_path, 'wb') do |saved_file|
open(component_url, 'rb') do |read_file|
saved_file.write(read_file.read)
end
end
rescue => error
raise Puppet::Error,("\nCan not get file from #{url}:\n#{error.message}\n")
end
disable_upgrade_service(manager, username, password)
return file_path
end
def get_component_url(manager, username, password)
node_version = get_node_version(manager, username, password)
begin
manifest = open("http://#{manager}:8080/repository/#{node_version}/metadata/manifest").read
rescue => error
raise Puppet::Error,("\nCan not get url for nsx-t components from #{url}:\n#{error.message}\n")
end
manifest.split(/\n/).each do |str|
if str.include? 'NSX_HOST_COMPONENT_UBUNTU_1404_TAR'
url = str.split('=')[1]
if node_version.start_with?('1.0.0')
return "http://#{manager}:8080#{url}"
else
return "http://#{manager}:8080/repository/#{url}"
end
end
end
end
def get_node_version(manager, username, password)
debug("Try get nsx-t node version from #{manager}")
api_url = "https://#{manager}/api/v1/node"
response = nsxt_api(api_url, username, password, 'get')
debug("response:\n #{response}")
if not response.to_s.empty?
return response['node_version']
end
raise Puppet::Error,("\nCan not get node version from #{manager}\n")
end
def check_service_enabled(manager, username, password)
debug("Check install-upgrade service enabled on #{manager}")
api_url = "https://#{manager}/api/v1/node/services/install-upgrade"
response = nsxt_api(api_url, username, password, 'get')
debug("response:\n #{response}")
if not response.to_s.empty?
if response['service_properties']['enabled'] == true
return response['service_properties']['enabled_on']
end
return 'disabled'
end
return 'error'
end
def check_service_running(manager, username, password)
debug("Check install-upgrade service running on #{manager}")
api_url = "https://#{manager}/api/v1/node/services/install-upgrade/status"
response = nsxt_api(api_url, username, password, 'get')
debug("response:\n #{response}")
if not response.to_s.empty?
if response['runtime_state'] == 'running'
return true
end
end
return false
end
def enable_upgrade_service(manager, username, password)
debug("Try enable install-upgrade service on #{manager}")
request = {'service_name' => 'install-upgrade', 'service_properties' => {'enabled' => true }}
api_url = "https://#{manager}/api/v1/node/services/install-upgrade"
response = nsxt_api(api_url, username, password, 'put', request.to_json)
debug("response:\n #{response}")
if response['service_properties']['enabled'] == true
return response['service_properties']['enabled_on']
end
raise Puppet::Error,("\nCannot enable install-upgrade service on nsx-t manager #{manager}\n")
end
def nsxt_api(api_url, username, password, method, request='', timeout=5)
retry_count = 3
begin
if method == 'get'
response = RestClient::Request.execute(method: :get, url: api_url, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_NONE)
elsif method == 'put'
response = RestClient::Request.execute(method: :put, url: api_url, payload: request, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_NONE, headers: {'Content-Type' => 'application/json'})
end
response_hash = JSON.parse(response.body)
return response_hash
rescue Errno::ECONNREFUSED
notice("\nCan not get response from #{api_url} - 'Connection refused', try next if exist\n")
return ""
rescue Errno::EHOSTUNREACH
notice("\nCan not get response from #{api_url} - 'No route to host', try next if exist\n")
return ""
rescue => error
retry_count -= 1
if retry_count > 0
sleep 10
retry
else
raise Puppet::Error,("\nCan not get response from #{api_url} :\n#{error.message}\n#{JSON.parse(error.response)['error_message']}\n")
end
end
end

31
hiera_overrides.rb

@@ -1,31 +0,0 @@
require 'yaml'
module Puppet::Parser::Functions
newfunction(:hiera_overrides, :doc => <<-EOS
Custom function to override hiera parameters. The first argument is the
name of the file to which the new parameters are written in YAML format, e.g.:
hiera_overrides('/etc/hiera/test.yaml')
EOS
) do |args|
filename = args[0]
begin
yaml_string = File.read filename
hiera_overrides = YAML.load yaml_string
rescue Errno::ENOENT
hiera_overrides = {}
end
# override neutron_advanced_configuration
neutron_advanced_configuration = {}
neutron_advanced_configuration['neutron_dvr'] = false
neutron_advanced_configuration['neutron_l2_pop'] = false
neutron_advanced_configuration['neutron_l3_ha'] = false
neutron_advanced_configuration['neutron_qos'] = false
hiera_overrides['neutron_advanced_configuration'] = neutron_advanced_configuration
# write to hiera override yaml file
File.open(filename, 'w') { |file| file.write(hiera_overrides.to_yaml) }
end
end

24
skip_provider_network.rb

@@ -1,24 +0,0 @@
require 'yaml'
module Puppet::Parser::Functions
newfunction(:skip_provider_network, :doc => <<-EOS
Custom function to set the skip_provider_network hiera override. The first
argument is the name of the file to which the parameter is written in YAML
format, e.g.:
skip_provider_network('/etc/hiera/test.yaml')
EOS
) do |args|
filename = args[0]
begin
yaml_string = File.read filename
hiera_overrides = YAML.load yaml_string
rescue Errno::ENOENT
hiera_overrides = {}
end
hiera_overrides['skip_provider_network'] = true
# write to hiera override yaml file
File.open(filename, 'w') { |file| file.write(hiera_overrides.to_yaml) }
end
end

13
nsx_config/ini_setting.rb

@@ -1,13 +0,0 @@
Puppet::Type.type(:nsx_config).provide(
:ini_setting,
:parent => Puppet::Type.type(:openstack_config).provider(:ini_setting)
) do
def file_path
'/etc/neutron/plugins/vmware/nsx.ini'
end
def separator
' = '
end
end

89
nsxt_add_to_fabric/nsxt_add_to_fabric.rb

@@ -1,89 +0,0 @@
require File.join(File.dirname(__FILE__),'..', 'nsxtutils')
Puppet::Type.type(:nsxt_add_to_fabric).provide(:nsxt_add_to_fabric, :parent => Puppet::Provider::Nsxtutils) do
# required for the nsxtcli helper method to work
commands :nsxcli => 'nsxcli'
def create
debug("Attempting to register a node")
# defined outside the loop so its error output can be raised afterwards
out_reg = ''
@resource[:managers].each do |manager|
thumbprint = get_manager_thumbprint(manager, @resource[:ca_file])
if not thumbprint.empty?
# 12 retry x 15 sleep time = 3 minutes timeout
retry_count = 12
while retry_count > 0
out_reg = nsxtcli("join management-plane #{manager} username #{@resource[:username]} thumbprint #{thumbprint} password #{@resource[:password]}")
if exists?
notice("Node added to NSX-T fabric")
return true
else
retry_count -= 1
sleep 15
end
end
end
end
raise Puppet::Error,("\nNode not add to NSX-t fabric:\n #{out_reg}\n")
end
def exists?
connected_managers = nsxtcli("get managers")
if connected_managers.include? "Connected"
node_id = get_node_id
if not node_id.empty?
@resource[:managers].each do |manager|
if check_node_registered(manager, node_id)
debug("Node '#{node_id}' connected and registered on '#{manager}'")
return true
end
end
end
end
debug("Node NOT registered on NSX-T manager")
return false
end
def destroy
debug("Attempting to unregister a node")
# defined outside the loop so its error output can be raised afterwards
out_unreg = ''
@resource[:managers].each do |manager|
thumbprint = get_manager_thumbprint(manager, @resource[:ca_file])
if not thumbprint.empty?
# 12 retry x 15 sleep time = 3 minutes timeout
retry_count = 12
while retry_count > 0
out_unreg = nsxtcli("detach management-plane #{manager} username #{@resource[:username]} thumbprint #{thumbprint} password #{@resource[:password]}")
if not exists?
notice("Node deleted from NSX-T fabric")
return true
else
retry_count -= 1
sleep 15
end
end
end
end
raise Puppet::Error,("\nNode not deleted from NSX-t fabric: \n #{out_unreg}\n")
end
def check_node_registered(manager, node_id)
api_url = "https://#{manager}/api/v1/fabric/nodes/#{node_id}/state"
response = get_nsxt_api(api_url, @resource[:username], @resource[:password], @resource[:ca_file])
if not response.to_s.empty?
if response['state'] == 'success'
debug("Node '#{node_id}' registered on '#{manager}'")
return true
else
debug("Node NOT registered on '#{manager}', details:\n#{response['details']}")
end
else
debug("Node NOT registered on '#{manager}'")
end
return false
end
end

149
nsxt_create_transport_node/nsxt_create_transport_node.rb

@@ -1,149 +0,0 @@
require 'socket'
require File.join(File.dirname(__FILE__),'..', 'nsxtutils')
Puppet::Type.type(:nsxt_create_transport_node).provide(:nsxt_create_transport_node, :parent => Puppet::Provider::Nsxtutils) do
# required for the nsxtcli helper method to work
commands :nsxcli => 'nsxcli'
def create
host_switch_profile_ids = [{'key' => 'UplinkHostSwitchProfile', 'value' => @resource[:uplink_profile_id] }]
pnics = create_pnics_array(@resource[:pnics])
node_id = get_node_id
display_name = Socket.gethostname
# the host switch name used to create the transport node must match the transport zone's host switch name
host_switch_name = get_host_switch_name(@resource[:managers], @resource[:transport_zone_id])
request = {'display_name' => display_name,
'node_id' => node_id,
'host_switches' => [{'host_switch_name' => host_switch_name,
'static_ip_pool_id' => @resource[:static_ip_pool_id],
'host_switch_profile_ids' => host_switch_profile_ids,
'pnics' => pnics}],
'transport_zone_endpoints' => [{'transport_zone_id' => @resource[:transport_zone_id]}]
}
# puppet returns a string if an array parameter is provided with a single element
# https://projects.puppetlabs.com/issues/9850
transport_zone_profile_ids = @resource[:transport_zone_profile_ids]
if not transport_zone_profile_ids.instance_of? Array
transport_zone_profile_ids = [transport_zone_profile_ids]
end
if not transport_zone_profile_ids[0].to_s.empty?
request['transport_zone_endpoints'].push(transport_zone_profile_ids)
end
debug("Attempting to create a transport node")
@resource[:managers].each do |manager|
api_url = "https://#{manager}/api/v1/transport-nodes"
begin
response = post_nsxt_api(api_url, @resource[:username], @resource[:password], request.to_json, @resource[:ca_file])
rescue => error
raise Puppet::Error,("\nFailed to create the transport node: #{error.message}\n")
end
if not response.to_s.empty?
# 12 retry x 15 sleep time = 3 minutes timeout
retry_count = 12
while retry_count > 0
if exists?
notice("Node '#{node_id}' added to NSX-T as transport node")
return true
else
retry_count -= 1
sleep 15
end
end
end
end
raise Puppet::Error,("\nFailed to create the transport node, status in NSX-T manager not updated\n")
end
def exists?
connected_managers = nsxtcli("get controllers")
if connected_managers.include? "connected"
node_id = get_node_id
if not node_id.empty?
@resource[:managers].each do |manager|
if check_node_lcp_connected(manager, node_id)
debug("Node '#{node_id}' connected to controllers and LCP connectivity status UP on '#{manager}'")
return true
end
end
else
raise Puppet::Error,("\nFailed to create the transport node:\nNode not registered in management plane\n")
end
end
debug("Node NOT connected to NSX-T controllers")
return false
end
def destroy
debug("Attempting to delete a transport node")
node_id = get_node_id
@resource[:managers].each do |manager|
transport_node_id = get_transport_node_id(manager, node_id, @resource[:ca_file])
if not transport_node_id.empty?
api_url = "https://#{manager}/api/v1/transport-nodes/#{transport_node_id}"
begin
response = delete_nsxt_api(api_url, @resource[:username], @resource[:password], @resource[:ca_file])
rescue => error
raise Puppet::Error,("\nFailed to delete the transport node: #{error.message}\n")
end
if response
# 12 retry x 15 sleep time = 3 minutes timeout
retry_count = 12
while retry_count > 0
if not exists?
notice("Transport node '#{node_id}' delete from NSX-T")
return true
else
retry_count -= 1
sleep 15
end
end
end
end
end
raise Puppet::Error,("\nFailed to delete the transport node.\n")
end
def check_node_lcp_connected(manager, node_id)
api_url = "https://#{manager}/api/v1/fabric/nodes/#{node_id}/status"
response = get_nsxt_api(api_url, @resource[:username], @resource[:password], @resource[:ca_file])
if not response.to_s.empty?
if response['lcp_connectivity_status'] == 'UP'
debug("Node '#{node_id}' LCP status UP on '#{manager}'")
return true
else
debug("Node LCP status '#{response['lcp_connectivity_status']}' on '#{manager}'")
if not response['lcp_connectivity_status_details'].empty?
response['lcp_connectivity_status_details'].each do |details|
debug("On #{details['control_node_ip']} status: #{details['status']} failure_status: #{details['failure_status']}")
end
end
end
else
debug("Node LCP status NOT UP on '#{manager}'")
end
return false
end
def get_host_switch_name(managers, transport_zone_id)
managers.each do |manager|
debug("Attempt to get host_switch_name for '#{transport_zone_id}' transport zone from '#{manager}' manager")
api_url = "https://#{manager}/api/v1/transport-zones/#{transport_zone_id}"
response = get_nsxt_api(api_url, @resource[:username], @resource[:password], @resource[:ca_file])
if not response.to_s.empty?
return response['host_switch_name']
end
end
raise Puppet::Error,("\nCannot get host_switch_name for '#{transport_zone_id}' transport zone.\n")
end
def create_pnics_array(pnics)
result_pnic_pairs = []
pnics.each_line do |pnic_pair|
device,uplink = pnic_pair.split(':')
result_pnic_pairs.push({'device_name' => device.strip, 'uplink_name' => uplink.strip})
end
return result_pnic_pairs
end
end

169
nsxtutils.rb

@@ -1,169 +0,0 @@
# require ruby-rest-client, ruby-json
require 'rest-client'
require 'json'
require 'socket'
require 'openssl'
class Puppet::Provider::Nsxtutils < Puppet::Provider
def get_nsxt_api(api_url, username, password, ca_file, timeout=5)
retry_count = 3
begin
if ca_file.to_s.empty?
response = RestClient::Request.execute(method: :get, url: api_url, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_NONE)
else
response = RestClient::Request.execute(method: :get, url: api_url, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_PEER, ssl_ca_file: ca_file)
end
response_hash = JSON.parse(response.body)
return response_hash
rescue Errno::ECONNREFUSED
notice("\nCan not get response from #{api_url} - 'Connection refused', try next if exist\n")
return ""
rescue Errno::EHOSTUNREACH
notice("\nCan not get response from #{api_url} - 'No route to host', try next if exist\n")
return ""
rescue => error
retry_count -= 1
if retry_count > 0
sleep 10
retry
else
raise Puppet::Error,("\nCan not get response from #{api_url} :\n#{error.message}\n#{JSON.parse(error.response)['error_message']}\n")
end
end
end
def post_nsxt_api(api_url, username, password, request, ca_file, timeout=5)
retry_count = 3
begin
if ca_file.to_s.empty?
response = RestClient::Request.execute(method: :post, url: api_url, payload: request, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_NONE,headers: {'Content-Type' => 'application/json'})
else
response = RestClient::Request.execute(method: :post, url: api_url, payload: request, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_PEER, ssl_ca_file: ca_file, headers: {'Content-Type' => 'application/json'})
end
response_hash = JSON.parse(response.body)
return response_hash
rescue Errno::ECONNREFUSED
notice("\nCan not get response from #{api_url} - 'Connection refused', try next if exist\n")
return ""
rescue Errno::EHOSTUNREACH
notice("\nCan not get response from #{api_url} - 'No route to host', try next if exist\n")
return ""
rescue => error
retry_count -= 1
if retry_count > 0
sleep 10
retry
else
raise Puppet::Error,("\nCan not get response from #{api_url} :\n#{error.message}\n#{JSON.parse(error.response)['error_message']}\n")
end
end
end
def delete_nsxt_api(api_url, username, password, ca_file, timeout=5)
retry_count = 3
begin
if ca_file.to_s.empty?
response = RestClient::Request.execute(method: :delete, url: api_url, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_NONE)
else
response = RestClient::Request.execute(method: :delete, url: api_url, timeout: timeout, user: username, password: password, verify_ssl: OpenSSL::SSL::VERIFY_PEER, ssl_ca_file: ca_file)
end
# RestClient raises an exception for any non-2xx HTTP status code
return true
rescue Errno::ECONNREFUSED
notice("\nCan not get response from #{api_url} - 'Connection refused', try next if exist\n")
return false
rescue Errno::EHOSTUNREACH
notice("\nCan not get response from #{api_url} - 'No route to host', try next if exist\n")
return false
rescue => error
retry_count -= 1
if retry_count > 0
sleep 10
retry
else
raise Puppet::Error,("\nCan not get response from #{api_url} :\n#{error.message}\n")
end
end
end
def get_node_id
uuid = nsxtcli("get node-uuid")
if uuid =~ /\A[\da-f]{32}\z/i or uuid =~ /\A(urn:uuid:)?[\da-f]{8}-([\da-f]{4}-){3}[\da-f]{12}\z/i
return uuid
end
notice("Cannot get node uuid")
return ""
end
def get_transport_node_id(manager, node_id, ca_file)
api_url = "https://#{manager}/api/v1/transport-nodes"
response = get_nsxt_api(api_url, @resource[:username], @resource[:password], @resource[:ca_file])
if not response.to_s.empty?
response['results'].each do |node|
if node['node_id'] == node_id
return node['id']
end
end
end
notice("Cannot get transport node id")
return ""
end
def nsxtcli(cmd)
out_cli = nsxcli(['-c', cmd]).to_s.strip
debug("cmd out:\n #{out_cli}")
return out_cli
end
def get_manager_host_port(manager)
# the scheme is not taken into account; NSX-T 1.0 supports only https
manager =~ /(https:\/\/)?(?<host>[^:]+):?(?<port>\d+)?/
manager_host_port = {}
manager_host_port['host']= Regexp.last_match[:host]
port = Regexp.last_match[:port]
port = 443 if port.to_s.empty?
manager_host_port['port'] = port
return manager_host_port
end
def get_manager_thumbprint(manager, timeout=5, ca_file)
manager_host_port = get_manager_host_port(manager)
host = manager_host_port['host']
port = manager_host_port['port']
retry_count = 3
begin
# note: TCPSocket.new has no timeout argument (its third parameter is local_host)
tcp_client = TCPSocket.new(host, port)
ssl_context = OpenSSL::SSL::SSLContext.new()
if ca_file.to_s.empty?
ssl_context.verify_mode = OpenSSL::SSL::VERIFY_NONE
else
ssl_context.verify_mode = OpenSSL::SSL::VERIFY_PEER
ssl_context.ca_file = ca_file
end
ssl_client = OpenSSL::SSL::SSLSocket.new(tcp_client, ssl_context)
ssl_client.connect
cert = OpenSSL::X509::Certificate.new(ssl_client.peer_cert)
ssl_client.sysclose
tcp_client.close
return OpenSSL::Digest::SHA256.new(cert.to_der).to_s
rescue Errno::ECONNREFUSED
notice("\nCan not get 'thumbprint' from #{host}:#{port} - 'Connection refused', try next if exist\n")
return ""
rescue Errno::EHOSTUNREACH
notice("\nCan not get 'thumbprint' from #{host}:#{port} - 'No route to host', try next if exist\n")
return ""
rescue => error
retry_count -= 1
if retry_count > 0
sleep 5
retry
else
raise Puppet::Error,("\nCan not get thumbprint from #{host}:#{port} :\n#{error.message}\n")
end
end
end
end

28
nsx_config.rb

@@ -1,28 +0,0 @@
Puppet::Type::newtype(:nsx_config) do
ensurable
newparam(:name, :namevar => true) do
desc 'Section/setting name to manage from nsx.ini'
newvalues(/\S+\/\S+/)
end
newparam(:secret, :boolean => true) do
newvalues(:true, :false)
defaultto false
end
newparam(:ensure_absent_val) do
defaultto('<DEFAULT>')
end
newproperty(:value) do
munge do |value|
value = value.to_s.strip
value
end
newvalues(/^[\S ]*$/)
end
end

37
nsxt_add_to_fabric.rb

@@ -1,37 +0,0 @@
Puppet::Type.newtype(:nsxt_add_to_fabric) do
@doc = "Add kvm node to NSX-T fabric."
ensurable
newparam(:managers) do
isnamevar
desc 'IP address of one or more NSX-T managers separated by commas.'
munge do |value|
array = []
value.split(',').each do |manager|
manager.to_s.strip =~ /(https?:\/\/)?(?<host>[^:]+):?(?<port>\d+)?/
host= Regexp.last_match[:host]
port = Regexp.last_match[:port]
port = 443 if port.to_s.empty?
# strip the URL scheme; NSX-T 1.0 supports only the https scheme
array.push("#{host}:#{port}")
end
value = array
end
end
newparam(:username) do
desc 'The user name for login to NSX-T manager.'
end
newparam(:password) do
desc 'The password for login to NSX-T manager.'
end
newparam(:ca_file) do
desc 'CA certificate to verify NSX-T manager certificate.'
defaultto ''
end
end

58
nsxt_create_transport_node.rb

@@ -1,58 +0,0 @@
Puppet::Type.newtype(:nsxt_create_transport_node) do
@doc = "Create from kvm node NSX-T transport node."
ensurable
newparam(:managers) do
isnamevar
desc 'IP address of one or more NSX-T managers separated by commas.'
munge do |value|
array = []
value.split(',').each do |manager|
manager.to_s.strip =~ /(https?:\/\/)?(?<host>[^:]+):?(?<port>\d+)?/
host= Regexp.last_match[:host]
port = Regexp.last_match[:port]
port = 443 if port.to_s.empty?
# strip the URL scheme; NSX-T 1.0 supports only the https scheme
array.push("#{host}:#{port}")
end
value = array
end
end
newparam(:username) do
desc 'The user name for login to NSX-T manager.'
end
newparam(:password) do
desc 'The password for login to NSX-T manager.'
end
newparam(:ca_file) do
desc 'CA certificate to verify NSX-T manager certificate.'
defaultto ''
end
newparam(:uplink_profile_id) do
desc 'Ids of Uplink HostSwitch profiles to be associated with this HostSwitch.'
end
newparam(:pnics) do
desc 'Multiline string with "device_name : uplink_name" pairs.'
end
newparam(:static_ip_pool_id) do
desc 'ID of already configured Static IP Pool.'
end
newparam(:transport_zone_id) do
desc 'Transport zone ID.'
end
newparam(:transport_zone_profile_ids, :array_matching => :all) do
desc 'Array of TransportZoneProfileTypeIdEntry.'
defaultto ''
end
end

35
create_repo.pp

@@ -1,35 +0,0 @@
class nsxt::create_repo (
$managers,
$username,
$password,
$repo_dir = '/opt/nsx-t-repo',
$repo_file = '/etc/apt/sources.list.d/nsx-t-local.list',
$repo_pref_file = '/etc/apt/preferences.d/nsx-t-local.pref',
) {
$component_archive = get_nsxt_components($managers, $username, $password)
file { '/tmp/create_repo.sh':
ensure => file,
mode => '0755',
source => "puppet:///modules/${module_name}/create_repo.sh",
replace => true,
}
file { $repo_file:
ensure => file,
mode => '0644',
content => "deb file:${repo_dir} /",
replace => true,
}
file { $repo_pref_file:
ensure => file,
mode => '0644',
source => "puppet:///modules/${module_name}/pinning",
replace => true,
}
exec { 'Create repo':
path => '/usr/sbin:/usr/bin:/sbin:/bin',
command => "/tmp/create_repo.sh ${repo_dir} ${component_archive}",
provider => 'shell',
require => File['/tmp/create_repo.sh'],
}
}

6
hiera_override.pp

@@ -1,6 +0,0 @@
class nsxt::hiera_override (
$override_file_name,
) {
$override_file_path = "/etc/hiera/plugins/${override_file_name}.yaml"
hiera_overrides($override_file_path)
}

8
params.pp

@@ -1,8 +0,0 @@
class nsxt::params {
$hiera_key = 'nsx-t'
$hiera_yml = '/etc/hiera/plugins/nsx-t.yaml'
$plugin_package = 'python-vmware-nsx'
$core_plugin = 'vmware_nsx.plugin.NsxV3Plugin'
$nsx_plugin_dir = '/etc/neutron/plugins/vmware'
$nsx_plugin_config = '/etc/neutron/plugins/vmware/nsx.ini'
}

89
nsx.ini

@@ -1,89 +0,0 @@
[nsx_v3]
# IP address of one or more NSX managers separated by commas.
# The IP address should be of the form:
# [<scheme>://]<ip_address>[:<port>]
# If scheme is not provided https is used. If port is not provided
# port 80 is used for http and port 443 for https.
nsx_api_managers =
# User name of NSX Manager
nsx_api_user =
# Password of NSX Manager
nsx_api_password =
# UUID of the default NSX overlay transport zone that will be used for creating
# tunneled isolated Neutron networks. If no physical network is specified when
# creating a logical network, this transport zone will be used by default
default_overlay_tz_uuid =
# (Optional) Only required when creating VLAN or flat provider networks. UUID
# of default NSX VLAN transport zone that will be used for bridging between
# Neutron networks, if no physical network has been specified
default_vlan_tz_uuid =
# Default Edge Cluster Identifier
default_edge_cluster_uuid =
# Maximum number of times to retry API requests upon stale revision errors.
# retries = 10
# Specify a CA bundle file to use in verifying the NSX Manager
# server certificate. This option is ignored if "insecure" is set to True.
# If "insecure" is set to False and ca_file is unset, the system root CAs
# will be used to verify the server certificate.
# ca_file =
# If true, the NSX Manager server certificate is not verified. If false
# the CA bundle specified via "ca_file" will be used or if unset the
# default system root CAs will be used.
# insecure = True
# The time in seconds before aborting an HTTP connection to an NSX manager.
http_timeout = 10
# The time in seconds before aborting an HTTP read response from an NSX manager.
http_read_timeout = 180
# Maximum number of times to retry a HTTP connection.
http_retries = 3
# Maximum number of concurrent connections to each NSX manager.
concurrent_connections = 10
# The amount of time in seconds to wait before ensuring connectivity to
# the NSX manager if no manager connection has been used.
conn_idle_timeout = 10
# UUID of the default tier0 router that will be used for connecting to
# tier1 logical routers and configuring external networks
default_tier0_router_uuid =
# (Optional) UUID of the default NSX bridge cluster that will be used to
# perform L2 gateway bridging between VXLAN and VLAN networks. It is an
# optional field. If default bridge cluster UUID is not specified, admin will
# have to manually create a L2 gateway corresponding to a NSX Bridge Cluster
# using L2 gateway APIs. This field must be specified on one of the active
# neutron servers only.
# default_bridge_cluster_uuid =
# (Optional) The number of nested groups which are used by the plugin,
# each Neutron security-group is added to one nested group, and each nested
# group can contain at most 500 security-groups, therefore, the maximum
# number of security groups that can be created is
# 500 * number_of_nested_groups.
# The default is 8 nested groups, which allows a maximum of 4k security-groups,
# to allow creation of more security-groups, modify this figure.
# number_of_nested_groups =
# Acceptable values for 'metadata_mode' are:
# - 'access_network': this enables a dedicated connection to the metadata
# proxy for metadata server access via Neutron router.
# - 'dhcp_host_route': this enables host route injection via the dhcp agent.
# This option is only useful if running on a host that does not support
# namespaces otherwise access_network should be used.
# metadata_mode = access_network
# If True, an internal metadata network will be created for a router only when
# the router is attached to a DHCP-disabled subnet.
# metadata_on_demand = False
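A hedged sketch of sanity-checking the manager endpoint these settings point at (the /api/v1/node path is an assumption about the NSX-T REST API; credentials and host are placeholders)::

    # Expect an HTTP 200 with node details if the manager is reachable
    curl -k -u "<nsx_api_user>:<nsx_api_password>" https://<nsx_api_manager>/api/v1/node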

View File

@ -1,286 +0,0 @@
- id: nsx-t-hiera-override
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
- compute
required_for:
- netconfig
requires:
- globals
parameters:
puppet_manifest: puppet/manifests/hiera-override.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 120
- id: nsx-t-compute-vmware-nova-config
version: 2.0.0
type: puppet
groups:
- compute-vmware
required_for:
- enable_nova_compute_service
requires:
- top-role-compute-vmware
- top-role-compute
parameters:
puppet_manifest: puppet/manifests/compute-vmware-nova-config.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 180
- id: nsx-t-compute-nova-config
version: 2.0.0
type: puppet
groups:
- compute
required_for:
- enable_nova_compute_service
requires:
- top-role-compute
- openstack-network-compute-nova
parameters:
puppet_manifest: puppet/manifests/compute-nova-config.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 180
- id: nsx-t-gem-install
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
- compute
required_for:
- nsx-t-reg-node-on-management-plane
- nsx-t-reg-node-as-transport-node
requires:
- setup_repositories
parameters:
puppet_manifest: puppet/manifests/gem-install.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
- id: nsx-t-create-repo
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
- compute
required_for:
- netconfig
requires:
- nsx-t-gem-install
parameters:
puppet_manifest: puppet/manifests/create-repo.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 600
strategy:
type: one_by_one
- id: nsx-t-install-packages
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
- compute
required_for:
- openstack-network-start
- database
- primary-database
requires:
- netconfig
- nsx-t-create-repo
parameters:
puppet_manifest: puppet/manifests/install-nsx-packages.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
- id: nsx-t-install-plugin
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
required_for:
- openstack-network-end
requires:
- openstack-network-server-config
parameters:
puppet_manifest: puppet/manifests/install-nsx-plugin.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 60
- id: nsx-t-configure-plugin
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
required_for:
- openstack-network-end
requires:
- nsx-t-install-plugin
parameters:
puppet_manifest: puppet/manifests/configure-plugin.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 60
- id: nsx-t-neutron-server-stop
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
required_for:
- openstack-network-end
requires:
- openstack-network-server-config
parameters:
puppet_manifest: puppet/manifests/neutron-server-stop.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 60
- id: nsx-t-primary-neutron-server-start
version: 2.0.0
type: puppet
groups:
- primary-controller
required_for:
- primary-openstack-network-agents-metadata
- primary-openstack-network-agents-dhcp
requires:
- nsx-t-configure-plugin
cross-depends:
- name: nsx-t-neutron-server-stop
parameters:
puppet_manifest: puppet/manifests/neutron-server-start.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
- id: nsx-t-reg-node-on-management-plane
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
- compute
required_for:
- primary-openstack-network-agents-metadata
- primary-openstack-network-agents-dhcp
- openstack-network-end
requires:
- nsx-t-install-packages
parameters:
puppet_manifest: puppet/manifests/reg-node-on-management-plane.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
- id: nsx-t-reg-node-as-transport-node
version: 2.0.0
type: puppet
groups:
- primary-controller
- controller
- compute
required_for:
- primary-openstack-network-agents-metadata
- primary-openstack-network-agents-dhcp
- openstack-network-end
requires:
- nsx-t-reg-node-on-management-plane
parameters:
puppet_manifest: puppet/manifests/reg-node-as-transport-node.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
- id: nsx-t-neutron-server-start
version: 2.0.0
type: puppet
groups:
- controller
requires:
- nsx-t-neutron-server-stop
- nsx-t-configure-plugin
required_for:
- openstack-network-agents-metadata
- openstack-network-agents-dhcp
cross-depends:
- name: nsx-t-primary-neutron-server-start
parameters:
puppet_manifest: puppet/manifests/neutron-server-start.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 120
strategy:
type: one_by_one
- id: nsx-t-primary-configure-agents-dhcp
version: 2.0.0
type: puppet
groups:
- primary-controller
required_for:
- openstack-network-end
requires:
- primary-openstack-network-agents-dhcp
cross-depends:
- name: nsx-t-configure-agents-dhcp
parameters:
puppet_manifest: puppet/manifests/configure-agents-dhcp.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 180
- id: nsx-t-configure-agents-dhcp
version: 2.0.0
type: puppet
groups:
- controller
required_for:
- openstack-network-end
requires:
- openstack-network-agents-dhcp
parameters:
puppet_manifest: puppet/manifests/configure-agents-dhcp.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 120
- id: nsx-t-neutron-network-create
version: 2.1.0
type: puppet
groups:
- primary-controller
required_for:
- openstack-network-routers
requires:
- nsx-t-primary-neutron-server-start
- nsx-t-configure-agents-dhcp
- primary-openstack-network-agents-metadata
parameters:
puppet_manifest: puppet/manifests/neutron-network-create.pp
puppet_modules: puppet/modules:/etc/puppet/modules
timeout: 300
# skipped tasks
- id: openstack-network-networks
version: 2.0.0
type: skipped
- id: primary-openstack-network-plugins-l2
version: 2.0.0
type: skipped
- id: openstack-network-plugins-l2
version: 2.0.0
type: skipped
- id: primary-openstack-network-agents-l3
version: 2.0.0
type: skipped
- id: openstack-network-agents-l3
version: 2.0.0
type: skipped
- id: openstack-network-agents-sriov
version: 2.0.0
type: skipped
- id: enable_nova_compute_service
version: 2.0.0
type: skipped
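Individual tasks from the graph above can be re-run on deployed nodes; a hedged sketch reusing the Fuel CLI pattern that also appears in the test plan later in this commit (env and node ids are placeholders; the task id is taken from this file)::

    fuel --env <env_id> node --node-id <node_ids> --tasks nsx-t-create-repo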

View File

@ -1,2 +0,0 @@
docutils==0.9.1
sphinx>=1.1.2,!=1.2.0,<1.3
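A minimal sketch of installing these pinned documentation requirements (assuming the file is saved as requirements.txt next to the docs)::

    pip install -r requirements.txt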

View File

@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FuelNSXplugin.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FuelNSXplugin.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNSXplugin"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNSXplugin"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."

View File

@ -1,254 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [ ]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel NSX-T plugin'
copyright = u'2016, Mirantis Inc.'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0.0'
# The full version, including alpha/beta/rc tags.
release = '1.0-1.0.0-1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
#exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'FuelNSXplugindoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = { 'classoptions': ',openany,oneside',
'babel': '\\usepackage[english]{babel}',
'preamble': '\\setcounter{tocdepth}{3} '
'\\setcounter{secnumdepth}{0}'}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'nsx-test-plan-' + version + '.tex', u'Fuel NSX-T plugin testing documentation',
u'Mirantis Inc.', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuelnsxplugin', u'Fuel NSX-T plugin testing documentation',
[u'Mirantis Inc.'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'FuelNSXplugin', u'Fuel NSX-T plugin testing documentation',
u'Mirantis Inc.', 'FuelNSXplugin', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Insert footnotes where they are defined instead of at the end.
pdf_inline_footnotes = True
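The direct sphinx-build call equivalent to the `html` Makefile target for this configuration (output paths are illustrative)::

    sphinx-build -b html -d _build/doctrees . _build/html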

View File

@ -1,11 +0,0 @@
Fuel NSX-T plugin's testing documentation
==========================================
Testing documents
-----------------
.. toctree::
:glob:
:maxdepth: 3
source/nsx-t_test_plan

View File

@ -1,256 +0,0 @@
=================================
Test Plan for NSX-T plugin v1.0.0
=================================
************
Introduction
************
Purpose
=======
The main purpose of this document is to describe the Quality Assurance
activities required to ensure that the Fuel plugin for the VMware NSX driver
is ready for production. The project will offer VMware NSX
integration functionality with MOS. The scope of this plan defines the
following objectives:
* Identify testing activities;
* Outline the testing approach, test types, and test cycle that will be used;
* List metrics and deliverables;
* List items in and out of testing scope;
* Define exit criteria for testing;
* Describe test environment.
Scope
=====
The Fuel NSX-T plugin includes the NSX-T plugin for Neutron, which is developed
by a third party. This test plan covers the full functionality of the Fuel
NSX-T plugin, including basic scenarios related to the NSX Neutron plugin.
The following test types should be provided:
* Smoke/BVT tests
* Integration tests
* System tests
* Destructive tests
* GUI tests
Performance testing will be executed on the scale lab, and a custom set of
rally scenarios must be run against an NSX environment. The configuration,
environment and scenarios for performance/scale testing should be determined
separately.
Intended Audience
=================
This document is intended for project team staff (QA and Dev engineers and
managers) and all other persons who are interested in testing results.
Limitation
==========
The plugin (or its components) has the following limitations:
* VMware NSX-T plugin can be enabled only with Neutron tunnel segmentation.
* NSX Transformers Manager 1.0.0 and 1.0.1 are supported.
Product compatibility matrix
============================
.. list-table:: product compatibility matrix
:widths: 15 10 30
:header-rows: 1
* - Requirement
- Version
- Comment
* - MOS
- 9.0
-
* - OpenStack release
- Mitaka with Ubuntu 14.04
-
* - vSphere
- 6.0
-
* - VMware NSX Transformers
- 1.0.0, 1.0.1
-
**************************************
Evaluation Mission and Test Motivation
**************************************
The project's main goal is to build a MOS plugin that integrates the Neutron
VMware NSX-T plugin. This plugin makes it possible to utilize KVM and VMware
compute clusters. The plugin must be compatible with version 9.0 of Mirantis
OpenStack and should be tested with the software/hardware described in the
`product compatibility matrix`_.
See the VMware NSX-T plugin specification for more details.
Evaluation mission
==================
* Find important problems with integration of Neutron VMware NSX-T plugin.
* Verify the specification.
* Provide tests for maintenance update.
* Lab environment deployment.
* Deploy MOS with developed plugin installed.
* Create and run specific tests for plugin/deployment.
* Documentation.
*****************
Target Test Items
*****************
* Install/uninstall Fuel NSX-T plugin
* Deploy Cluster with Fuel NSX-T plugin by Fuel
* Roles of nodes
* controller
* mongo
* compute
* compute-vmware
* cinder-vmware
* Hypervisors:
* Qemu+Vcenter
* KVM
* Storage:
* Ceph
* Cinder
* VMWare vCenter/ESXi datastore for images
* Network
* Neutron with NSX-T plugin
* Additional components
* Ceilometer
* Health Check
* Upgrade master node
* MOS and VMware-NSX-T plugin
* Computes(Nova)
* Launch and manage instances
* Launch instances in batch
* Networks (Neutron)
* Create and manage public and private networks.
* Create and manage routers.
* Port binding / disabling
* Security groups
* Assign vNIC to a VM
* Connection between instances
* Horizon
* Create and manage projects
* Glance
* Create and manage images
* GUI
* Fuel UI
* CLI
* Fuel CLI
*************
Test approach
*************
The project test approach consists of Smoke, Integration, System, Regression,
Failover and Acceptance test levels.
**Smoke testing**
The goal of smoke testing is to ensure that the most critical features of Fuel
VMware NSX-T plugin work after new build delivery. Smoke tests will be used by
QA to accept software builds from Development team.
**Integration and System testing**
The goal of integration and system testing is to ensure that new or modified
components of Fuel and MOS work effectively with Fuel VMware NSX-T plugin
without gaps in data flow.
**Regression testing**
The goal of regression testing is to verify that key features of Fuel VMware
NSX-T plugin are not affected by any changes performed during preparation to
release (includes defects fixing, new features introduction and possible
updates).
**Failover testing**
Failover and recovery testing ensures that the target-of-test can successfully
failover and recover from a variety of hardware, software, or network
malfunctions without undue loss of data or data integrity.
**Acceptance testing**
The goal of acceptance testing is to ensure that Fuel VMware NSX-T plugin has
reached a level of stability that meets requirements and acceptance criteria.
***********************
Entry and exit criteria
***********************
Criteria for test process starting
==================================
Before the test process can start, some preparatory actions must be taken to
satisfy important preconditions. The following steps must be executed
successfully before the test phase starts:
* all project requirements are reviewed and confirmed;
* implementation of testing features has finished (a new build is ready for testing);
* implementation code is stored in GIT;
* test environment is prepared with the correct configuration and all needed software and hardware installed;
* test environment contains the last delivered build for testing;
* test plan is ready and confirmed internally;
* implementation of manual tests and autotests (if any) has finished.
Feature exit criteria
=====================
Testing of a feature can be finished when:
* All planned tests (prepared before) for the feature are executed; no defects are found during this run;
* All planned tests for the feature are executed; defects found during this run are verified or confirmed to be acceptable (known issues);
* The time for testing of that feature according to the project plan has run out and Project Manager confirms that no changes to the schedule are possible.
Suspension and resumption criteria
==================================
Testing of a particular feature is suspended if there is a blocking issue
which prevents test execution. A blocking issue can be one of the following:
* Testing environment for the feature is not ready
* Testing environment is unavailable due to failure
* Feature has a blocking defect, which prevents further usage of this feature and there is no workaround available
* CI tests fail
************
Deliverables
************
List of deliverables
====================
Project testing activities are to result in the following reporting documents:
* Test plan
* Test report
* Automated test cases
Acceptance criteria
===================
* All acceptance criteria for user stories are met.
* All test cases are executed; BVT tests are passed.
* Critical and high issues are fixed
* All required documents are delivered
* Release notes including a report on the known errors of that release
**********
Test cases
**********
.. include:: test_suite_smoke.rst
.. include:: test_suite_integration.rst
.. include:: test_suite_scale.rst
.. include:: test_suite_system.rst
.. include:: test_suite_failover.rst

View File

@ -1,123 +0,0 @@
Failover
========
Verify that deleting the Fuel NSX-T plugin is impossible if it is used by an existing cluster.
----------------------------------------------------------------------------------------------------
ID
##
nsxt_uninstall_negative
Description
###########
It is impossible to remove the plugin while at least one environment with the plugin enabled exists.
Complexity
##########
smoke
Steps
#####
1. Install NSX-T plugin on master node.
2. Create a new environment with enabled NSX-T plugin.
3. Try to delete plugin via cli from master node::
fuel plugins --remove nsxt==1.0.0
Expected result
###############
Alert: "400 Client Error: Bad Request (Can't delete plugin which is enabled for some environment.)" should be displayed.
Check plugin functionality after shutting down the primary controller.
---------------------------------------------------------------------------
ID
##
nsxt_shutdown_controller
Description
###########
Check plugin functionality after shutting down the primary controller.
Complexity
##########
core
Steps
#####
1. Log in to Fuel with the preinstalled plugin and a deployed HA environment with 3 controllers, 1 compute and 1 compute-vmware node.
2. Log in to Horizon.
3. Launch two instances in different az (nova and vcenter) and check connectivity to outside world from VMs.
4. Shutdown primary controller.
5. Ensure that VIPs have moved to another controller (one way to check is sketched after this test case).
6. Ensure that there is a connectivity to outside world from created VMs.
7. Create a new network and attach it to default router.
8. Launch two instances in different az (nova and vcenter) with new network and check network connectivity via ICMP.
Expected result
###############
Networking works correctly after the failure of the primary controller.
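A hedged way to perform the VIP check from step 5 on a surviving controller (assumes Pacemaker-managed VIP resources, as in a MOS HA deployment; the resource name pattern is an assumption)::

    pcs status | grep vip__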
Check cluster functionality after interrupting the connection with the NSX Manager.
------------------------------------------------------------------------------------------
ID
##
nsxt_interrupt_connection
Description
###########
This test verifies that the cluster remains functional after the connection with the NSX Manager is interrupted.
Complexity
##########
core
Steps
#####
1. Log in to Fuel with the preinstalled plugin and a deployed environment.
2. Launch instances in each az with default network.
3. Disrupt the connection with the NSX Manager and check that the controller has lost connectivity to NSX (one way to do this is sketched after this test case).
4. Try to create new network.
5. Restore connection with NSX manager.
6. Try to create new network again.
7. Launch instance in created network.
8. Ensure that all instances have connectivity to external network.
9. Run OSTF.
Expected result
###############
After the connection with the NSX Manager is restored, the cluster should be fully functional. All created VMs should be operable. All OSTF test cases should pass.
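A hedged way to disrupt and restore the manager connection for steps 3 and 5 (run on a controller; the NSX Manager IP is a placeholder)::

    # Drop outgoing traffic to the NSX Manager
    iptables -I OUTPUT -d <nsx_manager_ip> -j DROP
    # Later, restore connectivity
    iptables -D OUTPUT -d <nsx_manager_ip> -j DROP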

View File

@ -1,106 +0,0 @@
Integration
===========
Deploy dual-HV env with NSX-T plugin and Ceilometer.
----------------------------------------------------
ID
##
nsxt_ceilometer
Description
###########
Check deployment of an environment with Fuel NSX-T plugin and Ceilometer.
Complexity
##########
core
Steps
#####
1. Log in to the Fuel UI with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: Ceilometer
3. Add nodes with the following roles:
* Controller + Mongo
* Controller + Mongo
* Controller + Mongo
* Compute-vmware
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers and compute-vmware.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
Expected result
###############
Cluster should be deployed and all OSTF test cases should pass.
Deploy dual-HV env with NSX-T plugin and Ceph.
--------------------------------------------------
ID
##
nsxt_ceph
Description
###########
Check deployment of an environment with Fuel NSX-T plugin and Ceph.
Complexity
##########
core
Steps
#####
1. Log in to the Fuel UI with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: Ceph
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Ceph-OSD
* Ceph-OSD
* Ceph-OSD
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add 1 vSphere cluster and configure Nova Compute instance on controller.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
Expected result
###############
Cluster should be deployed and all OSTF test cases should pass.

View File

@ -1,176 +0,0 @@
Scale
=====
Check scale actions for controller nodes.
-----------------------------------------
ID
##
nsxt_add_delete_controller
Description
###########
Verifies that system functionality is OK when a controller has been removed.
Complexity
##########
core
Steps
#####
1. Log in to Fuel with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
3. Add nodes with the following roles:
* Controller
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add vSphere clusters and configure Nova Compute instance on controllers.
8. Deploy cluster.
9. Run OSTF.
10. Launch 1 vcenter instance and 1 nova instance.
11. Add 2 controller nodes.
12. Redeploy cluster.
13. Check that all instances are in place.
14. Run OSTF.
15. Delete 2 controller nodes.
16. Redeploy cluster.
17. Check that all instances are in place.
18. Run OSTF.
Expected result
###############
Cluster should be deployed and all OSTF test cases should pass.
Check scale actions for compute nodes.
--------------------------------------
ID
##
nsxt_add_delete_compute_node
Description
###########
Verify that system functionality is OK after redeployment.
Complexity
##########
core
Steps
#####
1. Connect to the Fuel web UI with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM/QEMU
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Controller
* Controller
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Deploy cluster.
8. Run OSTF.
9. Launch instance.
10. Add node with compute role.
11. Redeploy cluster.
12. Check that all instances are in place.
13. Run OSTF.
14. Remove node with compute role from base installation.
15. Redeploy cluster.
16. Check that all instances are in place.
17. Run OSTF.
Expected result
###############
Changing the cluster configuration was successful. The cluster should be deployed and all OSTF test cases should pass.
Check scale actions for compute-vmware nodes.
---------------------------------------------
ID
##
nsxt_add_delete_compute_vmware_node
Description
###########
Verify that system functionality is OK after redeployment.
Complexity
##########
core
Steps
#####
1. Connect to the Fuel web UI with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Controller
* Controller
* Compute-vmware
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add 1 vSphere cluster and configure Nova Compute instance on compute-vmware.
8. Deploy cluster.
9. Run OSTF.
10. Launch vcenter vm.
11. Add node with compute-vmware role.
12. Reconfigure vcenter compute clusters.
13. Redeploy cluster.
14. Check that instance is in place.
15. Run OSTF.
16. Remove node with compute-vmware role from base installation.
17. Reconfigure vcenter compute clusters.
18. Redeploy cluster.
19. Run OSTF.
Expected result
###############
Changing the cluster configuration was successful. The cluster should be deployed and all OSTF test cases should pass.

View File

@ -1,360 +0,0 @@
Smoke
=====
Install Fuel VMware NSX-T plugin.
---------------------------------
ID
##
nsxt_install
Description
###########
Check that plugin can be installed.
Complexity
##########
smoke
Steps
#####
1. Connect to the Fuel master node via ssh.
2. Upload NSX-T plugin.
3. Install NSX-T plugin.
4. Run command 'fuel plugins'.
5. Check name, version and package version of plugin.
Expected result
###############
Output::
[root@nailgun ~]# fuel plugins --install nsx-t-1.0-1.0.0-1.noarch.rpm
Loaded plugins: fastestmirror, priorities
Examining nsx-t-1.0-1.0.0-1.noarch.rpm: nsx-t-1.0-1.0.0-1.noarch
Marking nsx-t-1.0-1.0.0-1.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package nsx-t-1.0.noarch 0:1.0.0-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
Package Arch Version Repository Size
Installing:
nsx-t-1.0 noarch 1.0.0-1 /nsx-t-1.0-1.0.0-1.noarch 20 M
Transaction Summary
Install 1 Package
Total size: 20 M
Installed size: 20 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : nsx-t-1.0-1.0.0-1.noarch 1/1
Verifying : nsx-t-1.0-1.0.0-1.noarch 1/1
Installed:
nsx-t-1.0.noarch 0:1.0.0-1
Complete!
Plugin nsx-t-1.0-1.0.0-1.noarch.rpm was successfully installed.
Plugin was installed successfully using cli.
Uninstall Fuel VMware NSX-T plugin.
-----------------------------------
ID
##
nsxt_uninstall
Description
###########
Check that plugin can be removed.
Complexity
##########
smoke
Steps
#####
1. Connect to the Fuel node with the preinstalled NSX-T plugin via ssh.
2. Remove NSX-T plugin.
3. Run command 'fuel plugins' to ensure the NSX-T plugin has been removed.
Expected result
###############
Output::
[root@nailgun ~]# fuel plugins --remove nsx-t==1.0.0
Loaded plugins: fastestmirror, priorities
Resolving Dependencies
--> Running transaction check
---> Package nsx-t-1.0.noarch 0:1.0.0-1 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
Package Arch Version Repository Size
Removing:
nsx-t-1.0 noarch 1.0.0-1 @/nsx-t-1.0-1.0.0-1.noarch 20 M
Transaction Summary
Remove 1 Package
Installed size: 20 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : nsx-t-1.0-1.0.0-1.noarch 1/1
Verifying : nsx-t-1.0-1.0.0-1.noarch 1/1
Removed:
nsx-t-1.0.noarch 0:1.0.0-1
Complete!
Plugin nsx-t==1.0.0 was successfully removed.
Plugin was removed.
Verify that all UI elements of the NSX-T plugin section meet the requirements.
-------------------------------------------------------------------------------------
ID
##
nsxt_gui
Description
###########
Verify that all UI elements of the NSX-T plugin section meet the requirements.
Complexity
##########
smoke
Steps
#####
1. Login to the Fuel web UI.
2. Click on the Networks tab.
3. Verify that the NSX-T plugin section is present under the Other menu option.
4. Verify that check box 'NSX-T plugin' is enabled by default.
5. Verify that all labels of 'NSX-T plugin' section have the same font style and colour.
6. Verify that all elements of the NSX-T plugin section are vertically aligned.
Expected result
###############
All elements of the NSX-T plugin section are aligned and styled as required.
Deploy a non-HA cluster with NSX-T plugin and one compute node.
--------------------------------------------------------------------
ID
##
nsxt_smoke
Description
###########
Check deployment of a non-HA environment with NSX-T plugin and one compute node.
Complexity
##########
smoke
Steps
#####
1. Log in to Fuel with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM, QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Deploy cluster.
8. Run OSTF.
Expected result
###############
Cluster should be deployed successfully and all OSTF tests should pass.
Deploy HA cluster with NSX-T plugin.
------------------------------------
ID
##
nsxt_bvt
Description
###########
Check deployment of an HA environment with NSX-T plugin and vCenter.
Complexity
##########
smoke
Steps
#####
1. Connect to the Fuel web UI with the preinstalled NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM, QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Controller
* Controller
* Compute-vmware, cinder-vmware
* Compute, cinder
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers and compute-vmware.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
Expected result
###############
Cluster should be deployed and all OSTF tests should pass.
Check that the option 'Bypass NSX Manager certificate verification' works correctly
------------------------------------------------------------------------------------------
ID
##
nsxt_insecure_false
Description
###########
Check secure connection with NSX Manager.
Complexity
##########
advanced
Steps
#####
1. Provide a CA certificate via the web UI or through system storage.
2. Install NSX-T plugin.
3. Deploy cluster with one controller.
4. Upload the certificate file to the controller.
5. Set the insecure option to false and specify the certificate file (/etc/neutron/plugins/vmware/nsx.ini); a hedged sketch follows this test case.
6. Restart neutron-server.
7. Run OSTF.
Expected result
###############
Cluster should be deployed and all OSTF tests should pass.
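A hedged sketch of steps 4-6 on the controller (assumes the crudini utility is available; the certificate path is a placeholder; the [nsx_v3] section comes from the nsx.ini template earlier in this commit)::

    crudini --set /etc/neutron/plugins/vmware/nsx.ini nsx_v3 insecure false
    crudini --set /etc/neutron/plugins/vmware/nsx.ini nsx_v3 ca_file /etc/neutron/<ca_cert_file>
    service neutron-server restart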
Verify that the nsxt driver is configured properly after enabling the NSX-T plugin
------------------------------------------------------------------------------------------
ID
##
nsxt_config_ok
Description
###########
Check that all parameters in the nsxt driver config files have been filled with the values entered in the GUI. Typical applicable values are described in the plugin docs. Root and intermediate certificates are signed (provided as an attachment).
Complexity
##########
advanced
Steps
#####
1. Install NSX-T plugin.
2. Enable plugin on tab Networks -> NSX-T plugin.
3. Fill the form with corresponding values.
4. Do everything necessary to provide interoperability between the NSX-T plugin and the NSX Manager with a certificate.
5. Check Additional settings. Fill the form with corresponding values. Save settings by pressing the button.
Expected result
###############
Check that nsx.ini on controller nodes is properly configured.
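A hedged spot-check for this expected result on a controller (paths come from the plugin defaults earlier in this commit)::

    # Show effective (non-comment) settings written by the plugin
    grep -vE '^\s*(#|$)' /etc/neutron/plugins/vmware/nsx.ini
    grep core_plugin /etc/neutron/neutron.conf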

View File

@ -1,667 +0,0 @@
System
======
Set up for system tests
-----------------------
ID
##
nsxt_setup_system
Description
###########
Deploy an environment with 3 controllers and 1 compute node. Nova Compute instances run on controllers and compute-vmware nodes. This is the configuration for all system tests.
Complexity
##########
core
Steps
#####
1. Log in to the Fuel web UI with the pre-installed NSX-T plugin.
2. Create a new environment with the following parameters:
* Compute: KVM, QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Compute-vmware
* Compute
* Compute
4. Configure interfaces on nodes.
5. Configure network settings.
6. Enable and configure NSX-T plugin.
7. Configure VMware vCenter Settings. Add 2 vSphere clusters, configure Nova Compute instances on controller and compute-vmware.
8. Verify networks.
9. Deploy cluster.
10. Run OSTF.
Expected result
###############
Cluster should be deployed and all OSTF test cases should pass.
Check connectivity from VMs to public network
---------------------------------------------
ID
##
nsxt_public_network_availability
Description
###########
Verifies that the public network is available.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
4. Send ping from each instance to 8.8.8.8.
Expected result
###############
Pings should get a response.
Check abilities to create and terminate networks on NSX
-------------------------------------------------------
ID
##
nsxt_manage_networks
Description
###########
Check the ability to create/delete networks and attach them to/detach them from a router.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create private networks net_01 and net_02 with subnets.
4. Launch 1 instance on each network. Instances should belong to different az (nova and vcenter).
5. Attach (add interface) net_01 to default router. Check that instances can't communicate with each other.
6. Attach net_02 to default router.
7. Check that instances can communicate with each other via router.
8. Detach (delete interface) net_01 from default router.
9. Check that instances can't communicate with each other.
10. Delete created instances.
11. Delete created networks.
Expected result
###############
No errors.
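Steps 3-11 above can also be driven from the CLI; a hedged sketch using the Mitaka-era python-neutronclient (ids are placeholders)::

    neutron net-create net_01
    neutron subnet-create net_01 192.168.101.0/24 --name net_01__subnet
    neutron router-interface-add <router_id> <subnet_id>
    neutron router-interface-delete <router_id> <subnet_id>
    neutron net-delete net_01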
Check abilities to bind port on NSX to VM, disable and enable this port
-----------------------------------------------------------------------
ID
##
nsxt_manage_ports
Description
###########
Verifies that the system can manage the admin state of ports.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
4. Check that instances can communicate with each other.
5. Disable port attached to instance in nova az.
6. Check that instances can't communicate with each other.
7. Enable port attached to instance in nova az.
8. Check that instances can communicate with each other.
9. Disable port attached to instance in vcenter az.
10. Check that instances can't communicate with each other.
11. Enable port attached to instance in vcenter az.
12. Check that instances can communicate with each other.
13. Delete created instances.
Expected result
###############
NSX-T plugin should be able to manage admin state of ports.
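A hedged CLI equivalent of the port disable/enable steps (the port id is a placeholder; Mitaka-era python-neutronclient syntax)::

    neutron port-update <port_id> --admin-state-up=False
    neutron port-update <port_id> --admin-state-up=True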
Check abilities to assign multiple vNIC to a single VM
------------------------------------------------------
ID
##
nsxt_multiple_vnics
Description
###########
Check abilities to assign multiple vNICs to a single VM.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.101.0/24) to each network.
NOTE: There is a constraint on network interfaces: one of the subnets should have a gateway and the other should not, so disable the gateway on that subnet.
5. Launch instance VM_1 with image TestVM-VMDK and flavor m1.tiny in vcenter az.
6. Launch instance VM_2 with image TestVM and flavor m1.tiny in nova az.
7. Check abilities to assign multiple vNIC net01 and net02 to VM_1.
8. Check abilities to assign multiple vNIC net01 and net02 to VM_2.
9. Send icmp ping from VM_1 to VM_2 and vice versa.
Expected result
###############
VM_1 and VM_2 should each be attached to vNICs on net01 and net02. Pings should get a response.
Check connectivity between VMs attached to different networks with a router between them
----------------------------------------------------------------------------------------
ID
##
nsxt_connectivity_diff_networks
Description
###########
This test verifies connectivity between networks connected through a router.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Add two private networks (net01 and net02).
4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01, 192.168.102.0/24) to each network. Disable gateway for both subnets.
5. Launch 1 instance in each network. Instances should belong to different az (nova and vcenter).
6. Create new router (Router_01), set gateway and add interface to external network.
7. Enable gateway on subnets. Attach private networks to created router.
8. Verify that VMs on different networks can communicate with each other.
9. Add one more router (Router_02), set gateway and add interface to external network.
10. Detach net_02 from Router_01 and attach it to Router_02.
11. Assign floating IPs for all created VMs.
12. Check that default security group allows the ICMP.
13. Verify that VMs on different networks can communicate with each other via floating IPs.
14. Delete instances.
15. Detach created networks from routers.
16. Delete created networks.
17. Delete created routers.
Expected result
###############
NSX-T plugin should be able to create/delete routers and assign floating IPs to instances.
Check abilities to create and delete security group
---------------------------------------------------
ID
##
nsxt_manage_secgroups
Description
###########
Verifies that creating and removing security groups works fine.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create new security group with default rules.
4. Add ingress rule for ICMP protocol.
5. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
6. Attach created security group to instances.
7. Check that instances can ping each other.
8. Delete ingress rule for ICMP protocol.
9. Check that instances can't ping each other.
10. Delete instances.
11. Delete security group.
Expected result
###############
NSX-T plugin should be able to create/delete security groups and add/delete rules.
Check isolation between VMs in different tenants
------------------------------------------------
ID
##
nsxt_different_tenants
Description
###########
Verifies isolation between different tenants.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create new tenant with new user.
4. Activate new project.
5. Create network with subnet.
6. Create router, set gateway and add interface.
7. Launch instance and associate floating ip with vm.
8. Activate default tenant.
9. Launch instance (use the default network) and associate floating ip with vm.
10. Check that the default security group allows ingress ICMP traffic.
11. Send icmp ping between instances in different tenants via floating IPs.
Expected result
###############
Instances in different tenants can communicate with each other only via floating IPs.
Check connectivity between VMs with same ip in different tenants
----------------------------------------------------------------
ID
##
nsxt_same_ip_different_tenants
Description
###########
Verifies connectivity between VMs with the same IP in different tenants.
Complexity
##########
advanced
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Create 2 non-admin tenants 'test_1' and 'test_2' with common admin user.
4. Activate project 'test_1'.
5. Create network 'net1' and subnet 'subnet1' with CIDR 10.0.0.0/24
6. Create router 'router1' and attach 'net1' to it.
7. Create security group 'SG_1' and add rule that allows ingress icmp traffic
8. Launch two instances (VM_1 and VM_2) in created network with created security group. Instances should belong to different az (nova and vcenter).
9. Assign floating IPs for created VMs.
10. Activate project 'test_2'.
11. Create network 'net2' and subnet 'subnet2' with CIDR 10.0.0.0/24
12. Create router 'router2' and attach 'net2' to it.
13. Create security group 'SG_2' and add rule that allows ingress icmp traffic
14. Launch two instances (VM_3 and VM_4) in created network with created security group. Instances should belong to different az (nova and vcenter).
15. Assign floating IPs for created VMs.
16. Verify that VMs with the same IP in different tenants can communicate with each other via floating IPs. Send icmp ping from VM_1 to VM_3, VM_2 to VM_4 and vice versa.
Expected result
###############
Pings should get a response.
Verify that only the associated MAC and IP addresses can communicate on the logical port
----------------------------------------------------------------------------------------
ID
##
nsxt_bind_mac_ip_on_port
Description
###########
Verify that only the associated MAC and IP addresses can communicate on the logical port.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Log in to Horizon Dashboard.
3. Launch two instances in default network. Instances should belong to different az (nova and vcenter).
4. Verify that traffic can be successfully sent from and received on the MAC and IP address associated with the logical port.
5. Configure a new IP address from the subnet, different from the original one, on the instance associated with the logical port.
* ifconfig eth0 down
* ifconfig eth0 192.168.99.14 netmask 255.255.255.0
* ifconfig eth0 up
6. Confirm that the instance cannot communicate with that IP address.
7. Revert IP address. Configure a new MAC address on the instance associated with the logical port.
* ifconfig eth0 down
* ifconfig eth0 hw ether 00:80:48:BA:d1:30
* ifconfig eth0 up
8. Confirm that the instance cannot communicate with that MAC address and the original IP address.
Expected result
###############
The instance should not be able to communicate with the new IP and MAC addresses, but it should communicate with the old IP.
Check creation instance in the one group simultaneously
-------------------------------------------------------
ID
##
nsxt_batch_instance_creation
Description
###########
Verifies that the system can create and delete several instances simultaneously.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Navigate to Project -> Compute -> Instances
3. Launch 5 instances VM_1 simultaneously in vcenter az in default net. Verify that creation was successful.
4. Launch 5 instances VM_2 simultaneously in nova az in default net. Verify that creation was successful.
5. Delete all VMs simultaneously.
Expected result
###############
All instances should be created and deleted without any errors.
Verify that instances could be launched on enabled compute host
---------------------------------------------------------------
ID
##
nsxt_manage_compute_hosts
Description
###########
Check instance creation on enabled compute hosts.
Complexity
##########
core
Steps
#####
1. Set up for system tests.
2. Disable one of the compute hosts in each availability zone (vcenter and nova).
3. Create several instances in both az.
4. Check that instances were created on enabled compute hosts.
5. Disable the second compute host and enable the first one in each availability zone (vcenter and nova).
6. Create several instances in both az.
7. Check that instances were created on enabled compute hosts.
Expected result
###############
All instances were created on enabled compute hosts.
Create a mirror and update core repos on a cluster with NSX-T plugin
-------------------------------------------------------------------------
ID
##
nsxt_update_core_repos
Description
###########
Create a mirror and update core repos in a cluster with the NSX-T plugin.
Complexity
##########
core
Steps
#####
1. Set up for system tests
2. Log in to a controller node via the Fuel CLI, get the PIDs of services launched by the plugin, and store them:
`ps ax | grep neutron-server`
3. Run the following command on the Fuel Master node:
`fuel-mirror create -P ubuntu -G mos ubuntu`
4. Run the command below on the Fuel Master node:
`fuel-mirror apply -P ubuntu -G mos ubuntu --env <env_id> --replace`
5. Run the command below on the Fuel Master node:
`fuel --env <env_id> node --node-id <node_ids_separated_by_comma> --tasks setup_repositories`
Wait until the task is done.
6. Log in to the controller node and check that the plugin's services are alive and that their PIDs have not changed.
7. Check that all nodes remain in the ready status.
8. Rerun OSTF.
Expected result
###############
Cluster (nodes) should remain in ready state.
OSTF tests should pass on rerun.
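A possible way to store and later compare the PIDs from step 2 (a sketch only;
the file paths are arbitrary)::

   pgrep -f neutron-server | sort > /tmp/pids_before
   # ... perform steps 3-5 ...
   pgrep -f neutron-server | sort > /tmp/pids_after
   diff /tmp/pids_before /tmp/pids_after && echo "PIDs unchanged"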
Configuration with multiple NSX managers
----------------------------------------
ID
##
nsxt_multiple_nsx_managers
Description
###########
The NSX-T plugin can be configured with several NSX managers at once.
Complexity
##########
core
Steps
#####
1. Prepare 2 NSX managers and create a cluster.
2. Configure the plugin.
3. Set a comma-separated list of NSX managers:
nsx_api_managers = 1.2.3.4,1.2.3.5
4. Deploy the cluster.
5. Run OSTF.
6. Power off the first NSX manager.
7. Run OSTF.
8. Power off the second NSX manager and power on the first NSX manager.
9. Run OSTF.
Expected result
###############
OSTF tests should pass.
Deploy HOT
----------
ID
##
nsxt_hot
Description
###########
The template creates a flavor, a network, a security group, and an instance.
Complexity
##########
smoke
Steps
#####
1. Deploy a cluster with NSX-T.
2. Copy nsxt_stack.yaml to the controller on which heat will be run (a sketch of such a template is shown after this test case).
3. On the controller node, run::

    . ./openrc
    heat stack-create -f nsxt_stack.yaml teststack

4. Wait until the stack is fully created.
5. Check that the created instance is operable.
Expected result
###############
All objects related to the stack should be created successfully.
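For reference, a minimal HOT template with the four resources named above might
look as follows (a sketch only; the actual nsxt_stack.yaml shipped with the
test suite may use different names and properties)::

   heat_template_version: 2013-05-23
   description: Sketch of a stack with a flavor, network, security group, instance
   resources:
     test_flavor:
       type: OS::Nova::Flavor
       properties: {ram: 128, vcpus: 1, disk: 1}
     test_net:
       type: OS::Neutron::Net
     test_subnet:
       type: OS::Neutron::Subnet
       properties:
         network_id: {get_resource: test_net}
         cidr: 192.168.111.0/24
     test_sg:
       type: OS::Neutron::SecurityGroup
       properties:
         rules:
           - {protocol: icmp}
           - {protocol: tcp, port_range_min: 22, port_range_max: 22}
     test_vm:
       type: OS::Nova::Server
       properties:
         image: TestVM
         flavor: {get_resource: test_flavor}
         networks: [{network: {get_resource: test_net}}]
         security_groups: [{get_resource: test_sg}]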
@ -1,177 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " dirhtml to make HTML files named index.html in directories"
@echo " singlehtml to make a single large HTML file"
@echo " pickle to make pickle files"
@echo " json to make JSON files"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " qthelp to make HTML files and a qthelp project"
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
@echo " text to make text files"
@echo " man to make manual pages"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " gettext to make PO message catalogs"
@echo " changes to make an overview of all changed/added/deprecated items"
@echo " xml to make Docutils-native XML files"
@echo " pseudoxml to make pseudoxml-XML files for display purposes"
@echo " linkcheck to check all external links for integrity"
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
clean:
rm -rf $(BUILDDIR)/*
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
dirhtml:
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
@echo
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
singlehtml:
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
@echo
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
pickle:
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
@echo
@echo "Build finished; now you can process the pickle files."
json:
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
@echo
@echo "Build finished; now you can process the JSON files."
htmlhelp:
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in $(BUILDDIR)/htmlhelp."
qthelp:
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
@echo
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FuelNSXtplugin.qhcp"
@echo "To view the help file:"
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FuelNSXtplugin.qhc"
devhelp:
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
@echo
@echo "Build finished."
@echo "To view the help file:"
@echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNSXtplugin"
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNSXtplugin"
@echo "# devhelp"
epub:
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
@echo
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
latex:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
@echo "Run \`make' in that directory to run these through (pdf)latex" \
"(use \`make latexpdf' here to do that automatically)."
latexpdf:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through pdflatex..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
latexpdfja:
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
@echo "Running LaTeX files through platex and dvipdfmx..."
$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
text:
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
@echo
@echo "Build finished. The text files are in $(BUILDDIR)/text."
man:
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
@echo
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
texinfo:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."
gettext:
$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
@echo
@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."
changes:
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
@echo
@echo "The overview file is in $(BUILDDIR)/changes."
linkcheck:
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in $(BUILDDIR)/linkcheck/output.txt."
doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
xml:
$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
@echo
@echo "Build finished. The XML files are in $(BUILDDIR)/xml."
pseudoxml:
$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
@echo
@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."
@ -1,255 +0,0 @@
# -*- coding: utf-8 -*-
#
# Fuel NSX-T plugin documentation build configuration file, created by
# sphinx-quickstart on Fri Aug 14 12:14:29 2015.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [ ]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel NSX-T plugin'
copyright = u'2016, Mirantis Inc.'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0.0'
# The full version, including alpha/beta/rc tags.
release = '1.0-1.0.0-1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
#exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
#html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'FuelNSX-Tplugindoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
'classoptions': ',openany,oneside',
'babel': '\\usepackage[english]{babel}'
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'nsx-t-user-guide-' + version + '.tex', u'Fuel NSX-T plugin documentation',
u'Mirantis Inc.', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuelnsxtplugin', u'Fuel NSX-T plugin documentation',
[u'Mirantis Inc.'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'FuelNSX-Tplugin', u'Fuel NSX-T plugin documentation',
u'Mirantis Inc.', 'FuelNSX-Tplugin', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# Insert footnotes where they are defined instead of at the end.
pdf_inline_footnotes = True
Binary files not shown: 8 image files deleted (5.6 KiB, 56 KiB, 10 KiB, 10 KiB, 79 KiB, 12 KiB, 30 KiB, 50 KiB).
@ -1,27 +0,0 @@
.. Fuel NSX-T plugin documentation master file
Welcome to Fuel NSX-T plugin's documentation!
=============================================
The Fuel NSX-T plugin allows the end user to use VMware NSX Transformers SDN
as a network backend for Neutron.
The plugin supports VMware NSX Transformers versions 1.0.0 and 1.0.1.
The pre-built package of the plugin is available in the
`Fuel Plugin Catalog <https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins>`_.
Documentation contents
======================
.. toctree::
:maxdepth: 2
source/installation
source/environment
source/configuration
source/limitations
source/usage
source/troubleshooting
source/release-notes
source/build
@ -1,49 +0,0 @@
How to build the plugin from source
===================================
To build the plugin, you first need to install fuel-plugin-builder_ 4.1.0:
.. code-block:: bash
$ pip install fuel-plugin-builder==4.1.0
Clone the plugin repository:
.. code-block:: bash
$ git clone https://git.openstack.org/openstack/fuel-plugin-nsx-t
$ cd fuel-plugin-nsx-t/
The librarian-puppet_ Ruby package must be installed. It is used to fetch the
upstream fuel-library_ Puppet modules that the plugin uses. It can be installed
via the *gem* package manager:
.. code-block:: bash
$ gem install librarian-puppet
or, if you are using Ubuntu Linux, you can install it from the repository:
.. code-block:: bash
$ apt-get install librarian-puppet
and build the plugin:
.. code-block:: bash
$ fpb --build .
fuel-plugin-builder will produce an .rpm package of the plugin, which you then
need to upload to the Fuel master node:
.. code-block:: bash
$ ls nsx*.rpm
nsx-t-1.0-1.0.0-1.noarch.rpm
.. _fuel-plugin-builder: https://pypi.python.org/pypi/fuel-plugin-builder/4.1.0
.. _librarian-puppet: http://librarian-puppet.com
.. _fuel-library: https://github.com/openstack/fuel-library
@ -1,86 +0,0 @@
Configuration
=============
Node interfaces for overlay traffic
-----------------------------------
NSX Transformers uses the STT protocol to carry virtual machine traffic. The
plugin requires that the interfaces used for STT traffic carry no other
traffic (PXE, storage, OpenStack management).
.. image:: ../image/stt-interface.png
Switch to the :guilabel:`Networks` tab of the Fuel web UI and click the
:guilabel:`Settings`/`Other` label. The plugin checkbox is enabled
by default. The screenshot below shows only the settings in focus:
.. image:: ../image/nsxt-settings.png
:scale: 60 %
The plugin contains the following settings:
#. Bypass NSX Manager certificate verification -- if enabled, the HTTPS
connection to NSX Manager is not verified. Otherwise, the following two
options are available:
* The "CA certificate file" setting appears below, making it possible to
upload a CA certificate that issued the NSX Manager certificate.
* With no CA certificate provided, the NSX Manager certificate is verified
against the CA certificate bundle that comes with Ubuntu 14.04.
#. NSX Manager -- IP address or hostname; multiple values can be separated by
commas. If you use a hostname in this field, ensure that your OpenStack
controllers can resolve it. Add the necessary DNS servers in the
:guilabel:`Host OS DNS Servers` section.
The OpenStack controllers must have L3 connectivity with NSX Manager through
the Public network, which is used as the default route.
#. Overlay transport zone ID -- UUID of the overlay (STT) transport zone, which
must be pre-created in NSX Manager.
#. VLAN transport zone ID -- UUID of the transport zone that represents the
network connecting virtual machines with the physical infrastructure.
#. Tier-0 router ID -- UUID of the tier-0 router that must exist in NSX
Transformers.
#. Edge cluster -- UUID of the NSX edge node cluster that must be installed and
configured.
#. Uplink profile ID -- UUID of the uplink profile that specifies the STT
interface configuration (e.g. the MTU value).
#. IP pool ID for controller VTEPs -- UUID of the IP pool that will be assigned
to the controllers' STT interfaces.
#. Colon separated pnics pairs for controller nodes -- this field sets the
correspondence between a physical NIC and the uplink name used in the "Uplink
profile ID" on controller nodes, e.g. ``enp0s1:uplink``. Each pair must be on a
separate line.
.. warning::
The uplink name must exactly match the value of "Active uplink" or "Standby
uplink" in the uplink switch profile specified above.
#. IP pool ID for compute VTEPs -- UUID of the IP pool that will be assigned to
the STT interfaces of the compute nodes.
#. Colon separated pnics pairs for compute nodes -- this field sets the
correspondence between a physical NIC and the uplink name used in the "Uplink
profile ID" on compute nodes, e.g. ``enp0s1:uplink``. Each pair must be on a
separate line.
#. Floating IP ranges -- dash-separated IP address allocation pool for the
external network, e.g. "172.16.212.2-172.16.212.40".
#. External network CIDR -- network in CIDR notation that includes the floating
IP ranges.
#. Gateway -- the default gateway for the external network; if not defined, the
first IP address of the network is used.
#. Internal network CIDR -- network in CIDR notation to use as the internal
network.
#. DNS for internal network -- comma-separated IP addresses of the DNS servers
for the internal network.
@ -1,62 +0,0 @@
OpenStack environment notes
===========================
Environment configuration
-------------------------
The Fuel NSX-T plugin cannot deploy NSX Transformers.
Before you start the OpenStack deployment, verify that your VMware NSX
Transformers setup is configured and functions properly:
* the VLAN transport zone must be present
* the overlay transport zone must be present
* a tier-0 router must be created
* an uplink profile for the OpenStack nodes must be created
* an NSX edge cluster must be formed
* an IP address pool for the OpenStack controller and compute nodes must exist
To use the NSX-T plugin, create a new OpenStack environment using the Fuel web
UI by doing the following:
#. On the :guilabel:`Networking setup` configuration step, select the
:guilabel:`Neutron with NSX-T plugin` radio button.
.. image:: ../image/neutron-nsxt-item.png
:scale: 70 %
ESXi hosts that participate in the vCenter cluster used on the VMware tab must
be manually added as transport nodes in NSX Manager. The hosts must be added
prior to the OpenStack deployment.
Network setup
-------------
Pay attention to which interface you assign the *Public* network to. The
OpenStack controllers must have connectivity with the NSX Manager host
through the *Public* network, since the *Public* network is the default
route for packets.
If NSX Manager and the NSX Controllers are going to communicate with the
OpenStack controllers and computes through the Public network, the
:guilabel:`Assign public network to all nodes` setting (Networks tab ->
Settings/Other label) must be enabled. Otherwise, the compute nodes will
communicate with NSX Manager through a controller that performs NAT, which
hides the compute node IP addresses and prevents them from registering in the
NSX management plane.
.. image:: ../image/nsx-t-public.png
:scale: 100%
Another way is to locate the NSX nodes in the OpenStack management network. In
this setup there is no need to assign the public network to all nodes, because
the OpenStack and NSX nodes have L2 connectivity and no NAT is performed. The
OpenStack controllers and computes will still use the Public network as the
default route.
.. image:: ../image/nsx-t-mgmt.png
:scale: 100%
During the deployment, the plugin creates a simple network topology for the
admin tenant: a provider network that connects the tenants with the transport
(physical) network, one internal network, and a router that is connected to
both networks.
@ -1,41 +0,0 @@
Installation
============
#. Download the plugin .rpm package from the `Fuel plugin catalog`_.
#. Upload the package to the Fuel master node.
#. Install the plugin with the ``fuel`` command-line tool:
.. code-block:: bash
[root@nailgun ~] fuel plugins --install nsx-t-1.0-1.0.0-1.noarch.rpm
#. Verify that the plugin installation is successful:
.. code-block:: bash
[root@nailgun ~] fuel plugins list
id | name | version | package_version | releases
---+-------+---------+-----------------+--------------------
6 | nsx-t | 1.0.0 | 4.0.0 | ubuntu (mitaka-9.0)
After the installation, the plugin can be used on new OpenStack clusters;
you cannot enable the plugin on already deployed clusters.
Uninstallation
--------------
Before uninstalling the plugin, ensure that no environments that use the
plugin are left; otherwise, uninstallation is not possible.
To uninstall the plugin, run the following:
.. code-block:: bash
[root@nailgun ~] fuel plugins --remove nsx-t==1.0.0
.. _Fuel plugin catalog: https://www.mirantis.com/products/openstack-drivers-and-plugins/fuel-plugins
@ -1,50 +0,0 @@
Limitations
===========
Fuel NSX-T plugin does not support SSL verification
---------------------------------------------------
The plugin does not allow the user to enable SSL certificate verification
during deployment. As a workaround, it is possible to store the certificate at
the post-deployment stage and enable it in the Neutron NSX configuration file
(``/etc/neutron/plugins/vmware/nsx.ini``); the Neutron NSX plugin itself
supports SSL certificates.
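For example, after the certificate has been stored on a controller, the
relevant ``nsx.ini`` options might look like this (a sketch only; the
certificate path is arbitrary)::

   [nsx_v3]
   insecure = False
   ca_file = /etc/neutron/plugins/vmware/nsx-ca.pem

Restart the neutron server afterwards so that the change takes effect.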
NSX-T plugin cannot be used simultaneously with NSXv plugin
-----------------------------------------------------------
Since both plugins provide the same network component, called
``network:neutron:core:nsx``, it is not possible to use both plugins together.
The plugin is not hotpluggable
------------------------------
It is not possible to enable the plugin on an already existing OpenStack environment.
Ubuntu cloud archive distribution is not supported
--------------------------------------------------
.. image:: ../image/uca-archive.png
:scale: 70 %
Fuel 9.0 provides two options for the OpenStack release: one is the plain
Ubuntu distribution, the other includes the Ubuntu cloud archive. The plugin
does not support Ubuntu cloud archive packages.
Ironic service is not supported
-------------------------------
The Ironic service requires control of top-of-rack switches and cannot be used
with NSX Transformers.
OpenStack environment reset/deletion
------------------------------------
The Fuel NSX-T plugin does not provide a cleanup mechanism when an OpenStack
environment is reset or deleted. All registered transport nodes, logical
switches, and routers remain intact; it is up to the operator to delete them
and free the resources.
@ -1,6 +0,0 @@
Release notes
=============
Release notes for Fuel NSX-T plugin 1.0.0:
* The plugin is compatible with Fuel 9.0 and 9.1.
@ -1,84 +0,0 @@
.. _troubleshooting:
Troubleshooting
===============
Neutron NSX plugin issues
-------------------------
The Neutron NSX-T plugin does not have a separate log file; its messages are
logged by the neutron server. The default log file for the neutron server on
the OpenStack controllers is ``/var/log/neutron/server.log``.
Inability to resolve NSX Manager hostname
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you see the following message:
::
2016-10-18 ... INFO vmware_nsx.plugins.nsx_v3.plugin [-] Starting NsxV3Plugin
2016-10-18 ... INFO vmware_nsx.nsxlib.v3.cluster [-] Endpoint 'https://nsxmanager.mydom.org'
changing from state 'INITIALIZED' to 'DOWN'
2016-10-18 ... WARNING vmware_nsx.nsxlib.v3.cluster [-] Failed to validate API cluster endpoint
'[DOWN] https://nsxmanager.mydom.org' due to: HTTPSConnectionPool(host='nsxmanager.mydom.org',
port=443): Max retries exceeded with url: /a...nes (Caused by NewConnectionError(
'<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff69b3c4b90>:
Failed to establish a new connection: [Errno -2] Name or service not known',))
2016-10-18 ... ERROR neutron.service [-] Unrecoverable error: please check log for details.
2016-10-18 ... ERROR neutron.service Traceback (most recent call last):
...
2016-10-18 ... ERROR neutron.service ServiceClusterUnavailable: Service cluster:
'https://nsxmanager.mydom.org' is unavailable. Please, check NSX setup and/or configuration
This means that the controller cannot resolve the NSX Manager hostname
(``nsxmanager.mydom.org`` in this example) that is specified in the
configuration file.
Check that the DNS server IP addresses that you specified in the
:guilabel:`Host OS DNS Servers` section of the Fuel web UI are correct
and reachable by all controllers; keep in mind that the default route for the
controllers is the *Public* network. Also, verify that the hostname you
entered is correct by trying to resolve it with the ``host`` or ``dig`` programs.
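For example, on a controller node (using the hostname from the log above)::

   host nsxmanager.mydom.org
   dig +short nsxmanager.mydom.org
   # Check which DNS servers the node actually uses
   cat /etc/resolv.conf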
SSL/TLS certificate problems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::
2016-10-28 12:32:26.086 2832 INFO vmware_nsx.nsxlib.v3.cluster [-] Endpoint
'https://172.16.0.249' changing from state 'INITIALIZED' to 'DOWN'
2016-10-28 12:32:26.087 2832 WARNING vmware_nsx.nsxlib.v3.cluster [-] Failed to
validate API cluster endpoint '[DOWN] https://172.16.0.249' due to: [Errno 1]
_ssl.c:510: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
This error indicates that you enabled SSL/TLS certificate verification, but
the verification failed when connecting to NSX Manager.
The possible causes are:
#. The NSX Manager certificate has expired. Log in to the NSX Manager web GUI
and check the certificate validity dates.
#. Check if the certification authority (CA) certificate is still valid.
The CA certificate is specified by the ``ca_file`` directive in ``nsx.ini``
(usually ``/etc/neutron/plugins/vmware/nsx.ini`` on the controller node).
User access problems
~~~~~~~~~~~~~~~~~~~~
::
2016-10-28 12:28:20.060 18259 INFO vmware_nsx.plugins.nsx_v3.plugin [-] Starting NsxV3Plugin
2016-10-28 12:28:20.201 18259 WARNING vmware_nsx.nsxlib.v3.client [-] The HTTP request returned error code 403,
whereas 200 response codes were expected. Response body {u'module_name': u'common-service',
u'error_message': u'The username/password combination is incorrect or the account specified has been locked.', u'error_code': u'98'}
2016-10-28 12:28:20.202 18259 INFO vmware_nsx.nsxlib.v3.cluster [-] Endpoint 'https://172.16.0.249' changing
from state 'INITIALIZED' to 'DOWN'
2016-10-28 12:28:20.203 18259 WARNING vmware_nsx.nsxlib.v3.cluster [-] Failed to validate API cluster endpoint
'[DOWN] https://172.16.0.249' due to: Unexpected error from backend manager (['172.16.0.249']) for GET https://172.16.0.249/api/
v1/transport-zones : The username/password combination is incorrect or the account specified has been locked.
Verify that the username and password entered on the plugin's pane are
correct.
@ -1,36 +0,0 @@
Usage
=====
The easiest way to check that the plugin works as expected is to create a
network or router using the ``neutron`` command-line tool:
::
[root@nailgun ~]# ssh node-4 # node-4 is a controller node
root@node-4:~# . openrc
root@node-4:~# neutron router-create r1
You can monitor the plugin's actions in ``/var/log/neutron/server.log`` and
see how edges appear in the ``Networking & Security -> NSX Edges`` pane in
vSphere Web Client. If you see error messages, check the
:ref:`Troubleshooting <troubleshooting>` section.
STT MTU considerations
----------------------
NSX Transformers uses the STT protocol to encapsulate VM traffic. The protocol
adds extra headers to each packet, so consider increasing the MTU on the
network equipment connected to hosts that will emit STT traffic.
Consider the following calculation::

   Outer IPv4 header    ==   20 bytes
   Outer TCP header     ==   24 bytes
   STT header           ==   18 bytes
   Inner Ethernet frame == 1518 bytes (14-byte header, 4-byte 802.1q header, 1500-byte payload)
   ----------------------------------
   Total                == 1580 bytes

Summing these gives 1580 bytes, so consider increasing the MTU on the network
hardware to 1600 bytes.
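On Linux hosts, the MTU of an STT-facing interface can be raised, for example,
with ``ip`` (a sketch only; the interface name is hypothetical)::

   ip link set dev eth3 mtu 1600

The same setting must also be applied on every switch port in the STT traffic
path.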
@ -1,154 +0,0 @@
attributes:
metadata:
group: network
insecure:
value: true
label: ""
description: ''
weight: 1
type: 'hidden'
ca_file:
value: ''
label: 'CA certificate file'
description: 'Specify a CA certificate file to use in NSX Manager certificate verification'
weight: 5
type: 'file'
restrictions:
- condition: "settings:nsx-t.insecure.value == true"
action: "hide"
nsx_api_managers:
value: ''
label: 'NSX Manager'
description: 'Multiple IP addresses can be separated by commas'
weight: 10
type: "text"
regex:
source: &non_empty '^.+$'
error: 'Enter IPv4 address'
nsx_api_user:
value: admin
label: 'User'
description: ''
weight: 15
type: "text"
regex:
source: *non_empty
error: 'User field cannot be empty'
nsx_api_password:
value: ''
label: 'Password'
description: ''
weight: 20
type: "password"
regex:
source: *non_empty
error: 'Password field cannot be empty'
default_overlay_tz_uuid:
value: ''
label: 'Overlay transport zone ID'
description: ''
weight: 25
type: "text"
regex:
source: &uuid '[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}'
error: 'Enter transport zone UUID'
default_vlan_tz_uuid:
value: ''
label: 'VLAN transport zone ID'
description: ''
weight: 30
type: "text"
regex:
source: *uuid
error: 'Enter transport zone UUID'
default_tier0_router_uuid:
value: ''
label: 'Tier-0 router ID'
weight: 35
regex:
source: *uuid
error: 'Enter tier-0 router UUID'
type: "text"
default_edge_cluster_uuid:
value: ''
label: 'Edge cluster'
weight: 40
regex:
source: *uuid
error: 'Enter cluster UUID'
type: "text"
uplink_profile_uuid:
value: ''
label: 'Uplink profile ID'
weight: 45
regex:
source: *uuid
error: 'Enter uplink profile ID'
type: "text"
controller_ip_pool_uuid:
value: ''
label: 'IP pool ID for controller VTEPs'
weight: 50
regex:
source: *uuid
error: 'Enter IP pool ID'
type: "text"
controller_pnics_pairs:
value: "enp0s1:uplink-1"
label: 'STT pnic:uplink pairs for openstack controllers'
description: 'Colon separated pnics pairs for controllers, each pair on separate line'
weight: 55
type: "textarea"
compute_ip_pool_uuid:
value: ''
label: 'IP pool ID for compute VTEPs'
weight: 60
regex:
source: *uuid
error: 'Enter IP pool ID'
type: "text"
compute_pnics_pairs:
value: "enp0s1:uplink-1"
label: 'STT pnic:uplink pairs for openstack computes'
description: 'Colon separated pnics pairs for compute nodes, each pair on separate line'
weight: 65
type: "textarea"
floating_ip_range:
value: ''
label: 'Floating IP range'
description: 'Dash separated IP addresses allocation pool from external network, e.g. "start_ip_address-end_ip_address"'
weight: 70
type: 'text'
regex:
source: '^(?:[0-9]{1,3}\.){3}[0-9]{1,3}-(?:[0-9]{1,3}\.){3}[0-9]{1,3}$'
error: 'Invalid IP ranges'
floating_net_cidr:
value: ''
label: 'External network CIDR'
description: 'Network in CIDR notation that includes floating IP ranges'
weight: 75
type: 'text'
regex:
source: '^(?:[0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
error: 'Invalid network in CIDR notation'
floating_net_gw:
value: ''
label: 'Gateway'
description: 'Default gateway for external network, if not defined, first IP address of the network is used'
weight: 80
type: 'text'
internal_net_cidr:
value: ''
label: 'Internal network CIDR'
description: 'Network in CIDR notation for use as internal'
weight: 85
type: 'text'
regex:
source: '^(?:[0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
error: 'Invalid network in CIDR notation'
internal_net_dns:
value: ''
label: 'DNS for internal network'
description: 'Comma separated IP addresses of DNS server for internal network'
weight: 90
type: 'text'
@ -1,23 +0,0 @@
name: nsx-t
title: NSX Transformers plugin
version: '1.0.0'
description: ''
fuel_version: ['9.0']
licenses: ['Apache License Version 2.0']
authors:
- 'Artem Savinov, Mirantis'
- 'Igor Zinovik, Mirantis'
homepage: https://github.com/openstack/fuel-plugin-nsx-t
groups: ['network']
# The plugin is compatible with releases in the list
releases:
- os: ubuntu
version: mitaka-9.0
mode: ['ha']
deployment_scripts_path: deployment_scripts/
repository_path: repositories/ubuntu
# Version of plugin package
package_version: '4.0.0'
is_hotpluggable: false
@ -1,14 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
@ -1 +0,0 @@
Subproject commit 15681971a6ff4adc9a8fd0b567f7443a3db6ffab
@ -1,14 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
@ -1,404 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
import time
import paramiko
from proboscis.asserts import assert_true
from devops.helpers.helpers import icmp_ping
from devops.helpers.helpers import tcp_ping
from devops.helpers.helpers import wait
from fuelweb_test import logger
from fuelweb_test.helpers.ssh_manager import SSHManager
from fuelweb_test.helpers.utils import pretty_log
from helpers import settings
# Defaults
external_net_name = settings.ADMIN_NET
zone_image_maps = {
'vcenter': 'TestVM-VMDK',
'nova': 'TestVM',
'vcenter-cinder': 'TestVM-VMDK'
}
instance_creds = (settings.VM_USER, settings.VM_PASS)
def create_instance(os_conn, net=None, az='nova', sg_names=None,
flavor_name='m1.micro', timeout=180, **kwargs):
"""Create instance with specified az and flavor.
:param os_conn: OpenStack
:param net: network object (default is private net)
:param az: availability zone name
:param sg_names: list of security group names
:param flavor_name: name of flavor
:param timeout: seconds to wait creation
:return: vm
"""
sg_names = sg_names if sg_names else ['default']
def find_by_name(objects, name):
for obj in objects:
if obj.name == name:
return obj
image = find_by_name(os_conn.nova.images.list(), zone_image_maps[az])
flavor = find_by_name(os_conn.nova.flavors.list(), flavor_name)
net = net if net else os_conn.get_network(settings.PRIVATE_NET)
sg = [os_conn.get_security_group(name) for name in sg_names]
vm = os_conn.create_server(availability_zone=az,
timeout=timeout,
image=image,
net_id=net['id'],
security_groups=sg,
flavor_id=flavor.id,
**kwargs)
return vm
def check_instances_state(os_conn):
"""Check that instances were not deleted and have 'active' status."""
instances = os_conn.nova.servers.list()
for inst in instances:
assert_true(not os_conn.is_srv_deleted(inst))
assert_true(os_conn.get_instance_detail(inst).status == 'ACTIVE')
def check_connection_vms(ip_pair, command='pingv4', result_of_command=0,
timeout=30, interval=5):
"""Check network connectivity between instances.
:param ip_pair: type dict, {ip_from: [ip_to1, ip_to2, etc.]}
:param command: type string, key 'pingv4', 'pingv6' or 'arping'
:param result_of_command: type integer, exit code of command execution
:param timeout: wait to get expected result
:param interval: interval of executing command
"""
commands = {
'pingv4': 'ping -c 5 {}',
'pingv6': 'ping6 -c 5 {}',
'arping': 'sudo arping -I eth0 {}'
}
msg = 'Command "{0}", Actual exit code is NOT {1}'
for ip_from in ip_pair:
with get_ssh_connection(ip_from, *instance_creds, timeout=60*5) as ssh:
for ip_to in ip_pair[ip_from]:
logger.info('Check connection from {0} to {1}'.format(
ip_from, ip_to))
cmd = commands[command].format(ip_to)
wait(lambda:
execute(ssh, cmd)['exit_code'] == result_of_command,
interval=interval,
timeout=timeout,
timeout_msg=msg.format(cmd, result_of_command))
def check_connection_through_host(remote, ip_pair, command='pingv4',
result_of_command=0, timeout=30,
interval=5):
"""Check network connectivity between instances.
:param ip_pair: type list, ips of instances
:param remote: access point IP
:param command: type string, key 'pingv4', 'pingv6' or 'arping'
:param result_of_command: type integer, exit code of command execution
:param timeout: wait to get expected result
:param interval: interval of executing command
"""
commands = {
'pingv4': 'ping -c 5 {}',
'pingv6': 'ping6 -c 5 {}',
'arping': 'sudo arping -I eth0 {}'
}
msg = 'Command "{0}", Actual exit code is NOT {1}'
for ip_from in ip_pair:
for ip_to in ip_pair[ip_from]:
logger.info('Check ping from {0} to {1}'.format(ip_from, ip_to))
cmd = commands[command].format(ip_to)
wait(lambda:
remote_execute_command(
remote,
ip_from,
cmd,
wait=timeout)['exit_code'] == result_of_command,
interval=interval,
timeout=timeout,
timeout_msg=msg.format(cmd, result_of_command))
def ping_each_other(ips, command='pingv4', expected_ec=0,
timeout=30, interval=5, access_point_ip=None):
"""Check network connectivity between instances.
:param ips: list, list of ips
:param command: type string, key 'pingv4', 'pingv6' or 'arping'
:param expected_ec: type integer, exit code of command execution
:param timeout: wait to get expected result
:param interval: interval of executing command
:param access_point_ip: It is used if check via host
"""
ip_pair = {key: [ip for ip in ips if ip != key] for key in ips}
if access_point_ip:
check_connection_through_host(remote=access_point_ip,
ip_pair=ip_pair,
command=command,
result_of_command=expected_ec,
timeout=timeout,
interval=interval)
else:
check_connection_vms(ip_pair=ip_pair,
command=command,
result_of_command=expected_ec,
timeout=timeout,
interval=interval)
def create_and_assign_floating_ips(os_conn, instances):
"""Associate floating ips with specified instances.
:param os_conn: type object, openstack
:param instances: type list, instances
"""
fips = []
for instance in instances:
ip = os_conn.assign_floating_ip(instance).ip
fips.append(ip)
wait(lambda: icmp_ping(ip), timeout=60 * 5, interval=5)
return fips
def get_ssh_connection(ip, username, userpassword, timeout=30, port=22):
"""Get ssh to host.
:param ip: string, host ip to connect to
:param username: string, a username to use for authentication
:param userpassword: string, a password to use for authentication
:param timeout: timeout (in seconds) for the TCP connection
:param port: host port to connect to
"""
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(ip, port=port, username=username,
password=userpassword, timeout=timeout)
return ssh
def execute(ssh_client, command):
"""Execute command on remote host.
:param ssh_client: SSHClient to instance
:param command: type string, command to execute
"""
channel = ssh_client.get_transport().open_session()
channel.exec_command(command)
result = {
'stdout': channel.recv(1024),
'stderr': channel.recv_stderr(1024),
'exit_code': channel.recv_exit_status()
}
return result
def remote_execute_command(instance1_ip, instance2_ip, command, wait=30):
"""Check execute remote command.
:param instance1_ip: string, instance ip connect from
:param instance2_ip: string, instance ip connect to
:param command: string, remote command
:param wait: integer, time to wait available ip of instances
"""
with get_ssh_connection(instance1_ip, *instance_creds) as ssh:
interm_transp = ssh.get_transport()
try:
logger.info('Opening channel between VMs {0} and {1}'.format(
instance1_ip, instance2_ip))
interm_chan = interm_transp.open_channel('direct-tcpip',
(instance2_ip, 22),
(instance1_ip, 0))
except Exception as e:
message = '{} Wait to update sg rules. Try to open channel again'
logger.info(message.format(e))
time.sleep(wait)
interm_chan = interm_transp.open_channel('direct-tcpip',
(instance2_ip, 22),
(instance1_ip, 0))
transport = paramiko.Transport(interm_chan)
transport.start_client()
logger.info("Passing authentication to VM")
transport.auth_password(*instance_creds)
channel = transport.open_session()
channel.get_pty()
channel.fileno()
channel.exec_command(command)
logger.debug("Receiving exit_code, stdout, stderr")
result = {
'stdout': channel.recv(1024),
'stderr': channel.recv_stderr(1024),
'exit_code': channel.recv_exit_status()
}
logger.debug('Command: {}'.format(command))
logger.debug(pretty_log(result))
logger.debug('Closing channel')
channel.close()
return result
def get_role(os_conn, role_name):
"""Get role by name."""
role_list = os_conn.keystone.roles.list()
for role in role_list:
if role.name == role_name:
return role
def add_role_to_user(os_conn, user_name, role_name, tenant_name):
"""Assign role to user.
:param os_conn: type object
:param user_name: type string,
:param role_name: type string
:param tenant_name: type string
"""
tenant_id = os_conn.get_tenant(tenant_name).id
user_id = os_conn.get_user(user_name).id
role_id = get_role(os_conn, role_name).id
os_conn.keystone.roles.add_user_role(user_id, role_id, tenant_id)
def check_service(ip, commands):
"""Check that required nova services are running on controller.
:param ip: ip address of node
:param commands: type list, nova commands to execute on controller,
example of commands:
['nova-manage service list | grep vcenter-vmcluster1']
"""
ssh_manager = SSHManager()
ssh_manager.check_call(ip=ip, command='source openrc')
for cmd in commands:
wait(lambda:
':-)' in ssh_manager.check_call(ip=ip, command=cmd).stdout[-1],
timeout=200)
def create_instances(os_conn, nics, vm_count=1,
security_groups=None, available_hosts=None,
flavor_name='m1.micro'):
"""Create VMs on available hypervisors.
:param os_conn: type object, openstack
:param vm_count: type integer, count of VMs to create
:param nics: type dictionary, neutron networks to assign to instance
:param security_groups: list of security group names
:param available_hosts: available hosts for creating instances
:param flavor_name: name of flavor
"""
def find_by_name(objects, name):
for obj in objects:
if obj.name == name:
return obj
# Get list of available images, flavors and hypervisors
instances = []
images = os_conn.nova.images.list()
flavor = find_by_name(os_conn.nova.flavors.list(), flavor_name)
if not available_hosts:
available_hosts = os_conn.nova.services.list(binary='nova-compute')
for host in available_hosts:
image = find_by_name(images, zone_image_maps[host.zone])
instance = os_conn.nova.servers.create(
flavor=flavor,
name='test_{0}'.format(image.name),
image=image,
min_count=vm_count,
availability_zone='{0}:{1}'.format(host.zone, host.host),
nics=nics, security_groups=security_groups)
instances.append(instance)
return instances
def verify_instance_state(os_conn, instances=None, expected_state='ACTIVE',
boot_timeout=300):
"""Verify that current state of each instance/s is expected.
:param os_conn: type object, openstack
:param instances: type list, list of created instances
:param expected_state: type string, expected state of instance
:param boot_timeout: type int, time in seconds to build instance
"""
if not instances:
instances = os_conn.nova.servers.list()
for instance in instances:
wait(lambda:
os_conn.get_instance_detail(instance).status == expected_state,
timeout=boot_timeout,
timeout_msg='Timeout is reached. '
'Current state of VM {0} is {1}.'
'Expected state is {2}'.format(
instance.name,
os_conn.get_instance_detail(instance).status,
expected_state))
def create_access_point(os_conn, nics, security_groups, host_num=0):
"""Create access point.
Creating instance with floating ip as access point to instances
with private ip in the same network.
:param os_conn: type object, openstack
:param nics: type dictionary, neutron networks to assign to instance
:param security_groups: list of security group names
:param host_num: index of the host
"""
# Get the host
host = os_conn.nova.services.list(binary='nova-compute')[host_num]
access_point = create_instances( # create access point server
os_conn=os_conn, nics=nics,
vm_count=1,
security_groups=security_groups,
available_hosts=[host]).pop()
verify_instance_state(os_conn)
access_point_ip = os_conn.assign_floating_ip(
access_point, use_neutron=True)['floating_ip_address']
wait(lambda: tcp_ping(access_point_ip, 22), timeout=60 * 5, interval=5)
return access_point, access_point_ip
def add_gateway_ip(os_conn, subnet_id, ip):
"""Add gateway ip for subnet."""
os_conn.neutron.update_subnet(subnet_id, {'subnet': {'gateway_ip': ip}})
def remove_router_interface(os_conn, router_id, subnet_id):
"""Remove subnet interface from router."""
os_conn.neutron.remove_interface_router(
router_id, {"router_id": router_id, "subnet_id": subnet_id})
@ -1,81 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
import os
from fuelweb_test.settings import get_var_as_bool
from fuelweb_test.settings import iface_alias
from fuelweb_test.settings import NEUTRON_SEGMENT_TYPE
HALF_MIN_WAIT = 30 # 30 seconds
WAIT_FOR_COMMAND = 60 * 3 # 3 minutes
WAIT_FOR_LONG_DEPLOY = 60 * 180 # 180 minutes
EXT_IP = '8.8.8.8' # Google DNS ^_^
PRIVATE_NET = os.environ.get('PRIVATE_NET', 'admin_internal_net')
ADMIN_NET = os.environ.get('ADMIN_NET', 'admin_floating_net')
DEFAULT_ROUTER_NAME = os.environ.get('DEFAULT_ROUTER_NAME', 'router04')
METADATA_IP = os.environ.get('METADATA_IP', '169.254.169.254')
VM_USER = 'cirros'
VM_PASS = 'cubswin:)'
AZ_VCENTER1 = 'vcenter'
AZ_VCENTER2 = 'vcenter2'
FLAVOR_NAME = 'm1.micro128'
PLUGIN_NAME = os.environ.get('PLUGIN_NAME', 'nsx-t')
NSXT_PLUGIN_PATH = os.environ.get('NSXT_PLUGIN_PATH')
NSXT_PLUGIN_VERSION = os.environ.get('NSXT_PLUGIN_VERSION', '1.0.0')
NSXT_MANAGERS_IP = os.environ.get('NSXT_MANAGERS_IP')
NSXT_USER = os.environ.get('NSXT_USER')
assigned_networks = {
iface_alias('eth0'): ['fuelweb_admin', 'private'],
iface_alias('eth1'): ['public'],
iface_alias('eth2'): ['management'],
iface_alias('eth4'): ['storage']
}
cluster_settings = {
'net_provider': 'neutron',
'assign_to_all_nodes': True,
'net_segment_type': NEUTRON_SEGMENT_TYPE
}
plugin_configuration = {
'insecure/value': get_var_as_bool(os.environ.get('NSXT_INSECURE'), True),
'nsx_api_managers/value': NSXT_MANAGERS_IP,
'nsx_api_user/value': NSXT_USER,
'nsx_api_password/value': os.environ.get('NSXT_PASSWORD'),
'default_overlay_tz_uuid/value': os.environ.get('NSXT_OVERLAY_TZ_UUID'),
'default_vlan_tz_uuid/value': os.environ.get('NSXT_VLAN_TZ_UUID'),
'default_tier0_router_uuid/value': os.environ.get(
'NSXT_TIER0_ROUTER_UUID'),
'default_edge_cluster_uuid/value': os.environ.get(
'NSXT_EDGE_CLUSTER_UUID'),
'uplink_profile_uuid/value': os.environ.get('NSXT_UPLINK_PROFILE_UUID'),
'controller_ip_pool_uuid/value': os.environ.get(
'NSXT_CONTROLLER_IP_POOL_UUID'),
'controller_pnics_pairs/value': os.environ.get(
'NSXT_CONTROLLER_PNICS_PAIRS'),
'compute_ip_pool_uuid/value': os.environ.get('NSXT_COMPUTE_IP_POOL_UUID'),
'compute_pnics_pairs/value': os.environ.get('NSXT_COMPUTE_PNICS_PAIRS'),
'floating_ip_range/value': os.environ.get('NSXT_FLOATING_IP_RANGE'),
'floating_net_cidr/value': os.environ.get('NSXT_FLOATING_NET_CIDR'),
'internal_net_cidr/value': os.environ.get('NSXT_INTERNAL_NET_CIDR'),
'floating_net_gw/value': os.environ.get('NSXT_FLOATING_NET_GW'),
'internal_net_dns/value': os.environ.get('NSXT_INTERNAL_NET_DNS')
}
@ -1,54 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
from functools import wraps
from fuelweb_test import logger
def find_first(seq, predicate):
"""Find the first item of sequence for which predicate is performed."""
return next((x for x in seq if predicate(x)), None)
class ShowPos(object):
"""Print func name and its parameters for each call."""
@staticmethod
def deco(f):
"""Logger decorator."""
def wrapper(*args, **kwargs):
logger.debug("Call {0}({1}, {2})".format(f.__name__, args, kwargs))
return f(*args, **kwargs)
return wrapper
def __getattribute__(self, name):
"""Log by attributes."""
attr = object.__getattribute__(self, name)
if callable(attr):
return ShowPos.deco(attr)
else:
return attr
def show_pos(f):
"""Wrapper shows current POSition in debug output."""
@wraps(f)
def wrapper(*args, **kwargs):
logger.debug('Call {func}({args}, {kwargs})'.format(func=f.__name__,
args=args,
kwargs=kwargs))
return f(*args, **kwargs)
return wrapper
@ -1,69 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
import os
import re
import sys
from nose.plugins import Plugin
from paramiko.transport import _join_lingering_threads
class CloseSSHConnectionsPlugin(Plugin):
"""Closes all paramiko's ssh connections after each test case.
Plugin fixes proboscis disability to run cleanup of any kind.
'afterTest' calls _join_lingering_threads function from paramiko,
which stops all threads (set the state to inactive and join for 10s)
"""
name = 'closesshconnections'
def options(self, parser, env=os.environ):
super(CloseSSHConnectionsPlugin, self).options(parser, env=env)
def configure(self, options, conf):
super(CloseSSHConnectionsPlugin, self).configure(options, conf)
self.enabled = True
def afterTest(self, *args, **kwargs):
_join_lingering_threads()
def import_tests():
from tests import test_plugin_nsxt # noqa
from tests import test_plugin_system # noqa
from tests import test_plugin_integration # noqa
from tests import test_plugin_scale # noqa
from tests import test_plugin_failover # noqa
def run_tests():
from proboscis import TestProgram # noqa
import_tests()
# Run Proboscis and exit.
TestProgram(addplugins=[CloseSSHConnectionsPlugin()]).run_and_exit()
if __name__ == '__main__':
sys.path.append(sys.path[0] + "/fuel-qa")
import_tests()
from fuelweb_test.helpers.patching import map_test
if any(re.search(r'--group=patching_master_tests', arg)
for arg in sys.argv):
map_test('master')
elif any(re.search(r'--group=patching.*', arg) for arg in sys.argv):
map_test('environment')
run_tests()
@ -1,182 +0,0 @@
---
template:
devops_settings:
aliases:
dynamic_address_pool:
- &pool_default !os_env POOL_DEFAULT, 10.109.0.0/16:24
default_interface_model:
- &interface_model !os_env INTERFACE_MODEL, e1000
rack-01-slave-interfaces: &rack-01-slave-interfaces
- label: eth0
l2_network_device: admin # Libvirt bridge name. It is *NOT* a Nailgun network
interface_model: *interface_model
- label: eth1
l2_network_device: public
interface_model: *interface_model
- label: eth2
l2_network_device: management
interface_model: *interface_model
- label: eth3
l2_network_device: private
interface_model: *interface_model
- label: eth4
l2_network_device: storage
interface_model: *interface_model
rack-01-slave-network_config: &rack-01-slave-network_config
eth0:
networks:
- fuelweb_admin
eth1:
networks:
- public
eth2:
networks:
- management
eth3:
networks:
- private
eth4:
networks:
- storage
rack-01-slave-node-params: &rack-01-slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 4
memory: !os_env SLAVE_NODE_MEMORY, 8192
boot:
- network
- hd
volumes:
- name: system
capacity: !os_env NODE_VOLUME_SIZE, 150
format: qcow2
interfaces: *rack-01-slave-interfaces
network_config: *rack-01-slave-network_config
rack-02-slave-node-params: &rack-02-slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 4
memory: !os_env SLAVE_NODE_MEMORY, 4096
boot:
- network
- hd
volumes:
- name: system
capacity: !os_env NODE_VOLUME_SIZE, 150
format: qcow2
interfaces: *rack-01-slave-interfaces
network_config: *rack-01-slave-network_config
env_name: !os_env ENV_NAME
address_pools:
# Network pools used by the environment
fuelweb_admin-pool01:
net: *pool_default
params:
tag: 0
public-pool01:
net: *pool_default
params:
tag: 0
storage-pool01:
net: *pool_default
params:
tag: 101
management-pool01:
net: *pool_default
params:
tag: 102
private-pool01:
net: *pool_default
params:
tag: 103
groups:
- name: cat
driver:
name: devops.driver.libvirt.libvirt_driver
params:
connection_string: !os_env CONNECTION_STRING, qemu:///system
storage_pool_name: !os_env STORAGE_POOL_NAME, default
stp: !os_env DRIVER_STP, True
hpet: !os_env DRIVER_HPET, False
use_host_cpu: !os_env DRIVER_USE_HOST_CPU, True
network_pools:
fuelweb_admin: fuelweb_admin-pool01
public: public-pool01
storage: storage-pool01
management: management-pool01
private: private-pool01
l2_network_devices: # Libvirt bridges. These are *NOT* Nailgun networks
admin:
address_pool: fuelweb_admin-pool01
dhcp: false
forward:
mode: nat
public:
address_pool: public-pool01
dhcp: false
forward:
mode: nat
storage:
address_pool: storage-pool01
dhcp: false
management:
address_pool: management-pool01
dhcp: false
private:
address_pool: private-pool01
dhcp: false
nodes:
- name: admin # Custom name of VM for Fuel admin node
role: fuel_master # Fixed role for Fuel master node properties
params:
vcpu: !os_env ADMIN_NODE_CPU, 2
memory: !os_env ADMIN_NODE_MEMORY, 8192
boot:
- hd
- cdrom # for boot from usb - without 'cdrom'
volumes:
- name: system
capacity: !os_env ADMIN_NODE_VOLUME_SIZE, 80
format: qcow2
- name: iso
source_image: !os_env ISO_PATH # if 'source_image' is set, the volume capacity is calculated from its size
format: raw
device: cdrom # for boot from usb - 'disk'
bus: ide # for boot from usb - 'usb'
interfaces:
- label: eth0
l2_network_device: admin # Libvirt bridge name. It is *NOT* a Nailgun network
interface_model: *interface_model
network_config:
eth0:
networks:
- fuelweb_admin
- name: slave-01
role: fuel_slave
params: *rack-01-slave-node-params
- name: slave-02
role: fuel_slave
params: *rack-01-slave-node-params
- name: slave-03
role: fuel_slave
params: *rack-01-slave-node-params
- name: slave-04
role: fuel_slave
params: *rack-02-slave-node-params
- name: slave-05
role: fuel_slave
params: *rack-02-slave-node-params

View File

@ -1,197 +0,0 @@
---
aliases:
dynamic_address_pool:
- &pool_default !os_env POOL_DEFAULT, 10.109.0.0/16:24
default_interface_model:
- &interface_model !os_env INTERFACE_MODEL, e1000
interfaces-configuration: &interfaces-configuration
- label: eth0
l2_network_device: admin
interface_model: *interface_model
- label: eth1
l2_network_device: public
interface_model: *interface_model
- label: eth2
l2_network_device: storage
interface_model: *interface_model
- label: eth3
l2_network_device: management
interface_model: *interface_model
- label: eth4
l2_network_device: private
interface_model: *interface_model
network_config: &network-configuration
eth0:
networks:
- admin
eth1:
networks:
- public
eth2:
networks:
- storage
eth3:
networks:
- management
eth4:
networks:
- private
controller-node-params: &controller-node-params
vcpu: !os_env CONTROLLER_NODE_CPU, 4
memory: !os_env CONTROLLER_NODE_MEMORY, 8192
boot:
- network
- hd
volumes:
- name: system
format: qcow2
capacity: !os_env NODE_VOLUME_SIZE, 150
interfaces: *interfaces-configuration
network_config: *network-configuration
slave-node-params: &slave-node-params
vcpu: !os_env SLAVE_NODE_CPU, 2
memory: !os_env SLAVE_NODE_MEMORY, 4096
boot:
- network
- hd
volumes:
- name: system
format: qcow2
capacity: !os_env NODE_VOLUME_SIZE, 150
interfaces: *interfaces-configuration
network_config: *network-configuration
template:
devops_settings:
env_name: !os_env ENV_NAME
address_pools:
admin-pool:
net: *pool_default
params:
vlan_start: 0
ip_reserved:
gateway: +1
l2_network_device: +1
ip_ranges:
default: [+2, -2]
public-pool:
net: *pool_default
params:
vlan_start: 0
ip_reserved:
gateway: +1
l2_network_device: +1
ip_ranges:
default: [+2, +127]
floating: [+128, -2]
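# Note added for clarity (offset semantics assumed from fuel-devops
# conventions): ip_reserved/ip_ranges values are relative to the allocated
# subnet -- "+1" is the first usable address (gateway and libvirt bridge
# here), "+2"/"-2" the second address from the start/end, so the floating
# range takes the upper half of the public subnet.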
storage-pool:
net: *pool_default
params:
vlan_start: 101
management-pool:
net: *pool_default
params:
vlan_start: 102
private-pool:
net: *pool_default
params:
vlan_start: 1000
vlan_end: 1030
groups:
- name: nsxt
driver:
name: devops.driver.libvirt
params:
connection_string: !os_env CONNECTION_STRING, qemu:///system
storage_pool_name: !os_env STORAGE_POOL_NAME, default
use_host_cpu: !os_env DRIVER_USE_HOST_CPU, True
enable_acpi: !os_env DRIVER_ENABLE_ACPI, True
enable_nwfilters: !os_env DRIVER_ENABLE_NWFILTERS, False
stp: True
hpet: True
network_pools:
admin: admin-pool
public: public-pool
storage: storage-pool
management: management-pool
private: private-pool
l2_network_devices:
admin:
address_pool: admin-pool
dhcp: false
# forward:
# mode: nat
public:
address_pool: public-pool
dhcp: false
# forward:
# mode: nat
storage:
address_pool: storage-pool
dhcp: false
management:
address_pool: management-pool
dhcp: false
private:
address_pool: private-pool
dhcp: false
nodes:
- name: admin
role: fuel_master
params:
vcpu: !os_env ADMIN_NODE_CPU, 2
memory: !os_env ADMIN_NODE_MEMORY, 8192
boot:
- hd
- cdrom
volumes:
- name: system
capacity: !os_env ADMIN_NODE_VOLUME_SIZE, 80
format: qcow2
- name: iso
source_image: !os_env ISO_PATH
format: raw
device: cdrom
bus: ide
interfaces:
- label: eth0
l2_network_device: admin
interface_model: *interface_model
network_config:
eth0:
networks:
- admin
- name: slave-01
role: fuel_slave
params: *controller-node-params
- name: slave-02
role: fuel_slave
params: *controller-node-params
- name: slave-03
role: fuel_slave
params: *controller-node-params
- name: slave-04
role: fuel_slave
params: *slave-node-params
- name: slave-05
role: fuel_slave
params: *slave-node-params

View File

@ -1,67 +0,0 @@
heat_template_version: 2013-05-23
description: >
HOT template to create a new neutron network plus a router to the public
network, and for deploying servers into the new network.
parameters:
admin_floating_net:
type: string
label: admin_floating_net
description: ID or name of public network for which floating IP addresses will be allocated
default: admin_floating_net
flavor:
type: string
label: flavor
description: Flavor to use for servers
default: m1.tiny
image:
type: string
label: image
description: Image to use for servers
default: TestVM-VMDK
resources:
private_net:
type: OS::Neutron::Net
properties:
name: net_1
private_subnet:
type: OS::Neutron::Subnet
properties:
network_id: { get_resource: private_net }
cidr: 10.0.0.0/29
dns_nameservers: [ 8.8.8.8, 8.8.4.4 ]
router:
type: OS::Neutron::Router
properties:
external_gateway_info:
network: { get_param: admin_floating_net }
router_interface:
type: OS::Neutron::RouterInterface
properties:
router_id: { get_resource: router }
subnet_id: { get_resource: private_subnet }
master_image_server_port:
type: OS::Neutron::Port
properties:
network_id: { get_resource: private_net }
fixed_ips:
- subnet_id: { get_resource: private_subnet }
master_image_server:
type: OS::Nova::Server
properties:
name: instance_1
image: { get_param: image }
flavor: { get_param: flavor }
availability_zone: "vcenter"
networks:
- port: { get_resource: master_image_server_port }
outputs:
server_info:
value: { get_attr: [master_image_server, show ] }

View File

@ -1,14 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""

View File

@ -1,144 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
import os
from devops.helpers.ssh_client import SSHAuth
from proboscis.asserts import assert_true
from fuelweb_test import logger
from fuelweb_test.helpers import utils
from fuelweb_test.helpers.utils import pretty_log
from fuelweb_test.tests.base_test_case import TestBasic
from fuelweb_test.settings import SSH_IMAGE_CREDENTIALS
from helpers import settings
cirros_auth = SSHAuth(**SSH_IMAGE_CREDENTIALS)
class TestNSXtBase(TestBasic):
"""Base class for NSX-T plugin tests"""
def __init__(self):
super(TestNSXtBase, self).__init__()
self.default = settings
self.vcenter_az = 'vcenter'
self.vmware_image = 'TestVM-VMDK'
def get_configured_clusters(self, node_ip):
"""Get configured vcenter clusters moref id on controller.
:param node_ip: type string, ip of node
"""
cmd = r"sed -rn 's/^\s*cluster_moid\s*=\s*([^ ]+)\s*$/\1/p' " \
"/etc/neutron/plugin.ini"
clusters_id = self.ssh_manager.check_call(ip=node_ip,
cmd=cmd).stdout
return (clusters_id[-1]).rstrip().split(',')
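# Illustrative example (added note; moref ids are made up): for a plugin.ini
# line such as
#     cluster_moid = domain-c7,domain-c9
# the sed expression above prints "domain-c7,domain-c9", so this method
# returns ['domain-c7', 'domain-c9'].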
def install_nsxt_plugin(self):
"""Download and install NSX-T plugin on master node.
:return: None
"""
master_ip = self.ssh_manager.admin_ip
utils.upload_tarball(ip=master_ip,
tar_path=self.default.NSXT_PLUGIN_PATH,
tar_target='/var')
utils.install_plugin_check_code(
ip=master_ip,
plugin=os.path.basename(self.default.NSXT_PLUGIN_PATH))
def enable_plugin(self, cluster_id, settings=None):
"""Enable NSX-T plugin on cluster.
:param cluster_id: cluster id
:param settings: settings in dict format
:return: None
"""
msg = "Plugin couldn't be enabled. Check plugin version. Test aborted"
settings = settings if settings else {}
checker = self.fuel_web.check_plugin_exists(cluster_id,
self.default.PLUGIN_NAME)
assert_true(checker, msg)
logger.info('Configure cluster with '
'following parameters: \n{}'.format(pretty_log(settings)))
self.fuel_web.update_plugin_settings(
cluster_id,
self.default.PLUGIN_NAME,
self.default.NSXT_PLUGIN_VERSION,
dict(self.default.plugin_configuration, **settings))
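# Note (added): the dict(...) call above merges per-test overrides on top of
# the plugin's default configuration before it is pushed to the cluster.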
def reconfigure_cluster_interfaces(self, cluster_id):
# clear the enp0s6 network mapping for all deployed nodes
nodes = self.fuel_web.client.list_cluster_nodes(cluster_id)
for node in nodes:
self.fuel_web.update_node_networks(node['id'],
settings.assigned_networks)
def delete_nsxt_plugin(self, failover=False):
"""Delete NSX-T plugin
:param failover: True if we expect that the plugin won't be deleted
:return: None
"""
plugin_name = self.default.PLUGIN_NAME
plugin_vers = self.default.NSXT_PLUGIN_VERSION
tmp = "Plugin '{0}' {1} removed"
msg = tmp.format(plugin_name, 'was' if failover else "wasn't")
cmd = 'fuel plugins --remove {0}=={1}'.format(plugin_name, plugin_vers)
self.ssh_manager.check_call(
ip=self.ssh_manager.admin_ip,
command=cmd,
expected=[1 if failover else 0],
raise_on_err=not failover
)
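# Illustrative note (added; values are placeholders): with
# PLUGIN_NAME='nsx-t' and NSXT_PLUGIN_VERSION='1.0.0' the command above
# expands to
#     fuel plugins --remove nsx-t==1.0.0
# expected=[1] together with failover=True asserts that removal fails.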
def _get_controller_with_vip(self):
"""Return name of controller with VIPs."""
for node in self.env.d_env.nodes().slaves:
ng_node = self.fuel_web.get_nailgun_node_by_devops_node(node)
if ng_node['online'] and 'controller' in ng_node['roles']:
hosts_vip = self.fuel_web.get_pacemaker_resource_location(
ng_node['devops_name'], 'vip__management')
logger.info('Now primary controller is '
'{}'.format(hosts_vip[0].name))
return hosts_vip[0].name
# No online controller holds the management VIP; return None so callers'
# truthiness checks fail instead of passing spuriously.
return None
def ping_from_instance(self, src_floating_ip, dst_ip, primary,
size=56, count=1):
"""Verify ping between instances.
:param src_floating_ip: floating ip address of instance
:param dst_ip: destination ip address
:param primary: name of the primary controller
:param size: number of data bytes to be sent
:param count: number of packets to be sent
"""
with self.fuel_web.get_ssh_for_node(primary) as ssh:
command = "ping -s {0} -c {1} {2}".format(size, count,
dst_ip)
ping = ssh.execute_through_host(
hostname=src_floating_ip,
cmd=command,
auth=cirros_auth
)
logger.info("Ping result is {}".format(ping['stdout_str']))
return 0 == ping['exit_code']

View File

@ -1,193 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
from proboscis import test
from proboscis.asserts import assert_true
from devops.helpers.helpers import tcp_ping
from devops.helpers.helpers import wait
from fuelweb_test.helpers.os_actions import OpenStackActions
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test.settings import SERVTEST_PASSWORD
from fuelweb_test.settings import SERVTEST_TENANT
from fuelweb_test.settings import SERVTEST_USERNAME
from fuelweb_test.tests.base_test_case import SetupEnvironment
from system_test import logger
from tests.base_plugin_test import TestNSXtBase
from tests.test_plugin_nsxt import TestNSXtBVT
@test(groups=['nsxt_plugin', 'nsxt_failover'])
class TestNSXtFailover(TestNSXtBase):
"""NSX-t failover automated tests"""
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=['nsxt_uninstall_negative'])
@log_snapshot_after_test
def nsxt_uninstall_negative(self):
"""Check plugin can not be removed while it is enabled for environment.
Scenario:
1. Install NSX-T plugin on Fuel Master node with 5 slaves.
2. Create new environment with enabled NSX-T plugin.
3. Try to delete plugin via cli from master node.
Duration: 10 min
"""
# Install NSX-T plugin on Fuel Master node with 5 slaves
self.show_step(1)
self.env.revert_snapshot('ready_with_5_slaves')
self.install_nsxt_plugin()
# Create new environment with enabled NSX-T plugin
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=self.default.cluster_settings,
configure_ssl=False)
self.enable_plugin(cluster_id)
# Try to delete plugin via cli from master node
self.show_step(3)
self.delete_nsxt_plugin(failover=True)
@test(depends_on=[TestNSXtBVT.nsxt_bvt],
groups=['nsxt_shutdown_controller'])
@log_snapshot_after_test
def nsxt_shutdown_controller(self):
"""Check plugin functionality after shutdown primary controller.
Scenario:
1. Get access to OpenStack.
2. Create VMs and check connectivity to outside world
from VM.
3. Shutdown primary controller.
4. Ensure that VIPs are moved to another controller.
5. Ensure that there is a connectivity to outside world from
created VM.
6. Create new network and attach it to default router.
7. Create VMs with new network and check network
connectivity via ICMP.
Duration: 180 min
"""
# Get access to OpenStack
self.show_step(1)
cluster_id = self.fuel_web.get_last_created_cluster()
os_conn = OpenStackActions(
self.fuel_web.get_public_vip(cluster_id),
SERVTEST_USERNAME,
SERVTEST_PASSWORD,
SERVTEST_TENANT)
# Create vcenter VM and check connectivity to outside world from VM
self.show_step(2)
image = os_conn.get_image(self.vmware_image)
net = os_conn.get_network(self.default.PRIVATE_NET)
sec_group = os_conn.create_sec_group_for_ssh()
vms = []
vms.append(os_conn.create_server(
net_id=net['id'],
security_groups=[sec_group]))
vms.append(os_conn.create_server(
availability_zone=self.vcenter_az,
image=image,
net_id=net['id'],
security_groups=[sec_group]))
ips = []
for vm in vms:
floating = os_conn.assign_floating_ip(vm)
wait(lambda: tcp_ping(floating.ip, 22),
timeout=180,
timeout_msg="Node {ip} is not accessible by SSH.".format(
ip=floating.ip))
ips.append(floating.ip)
vip_contr = self._get_controller_with_vip()
for ip in ips:
logger.info('Check connectivity from {0}'.format(ip))
assert_true(self.ping_from_instance(ip,
'8.8.8.8',
vip_contr),
'Ping failed')
# Shutdown primary controller
self.show_step(3)
primary_ctrl_devops = self.fuel_web.get_nailgun_primary_node(
self.env.d_env.nodes().slaves[0])
self.fuel_web.warm_shutdown_nodes([primary_ctrl_devops])
# Ensure that VIPs are moved to another controller
self.show_step(4)
vip_contr_new = self._get_controller_with_vip()
assert_true(vip_contr_new and vip_contr_new != vip_contr,
'VIPs have not been moved to another controller')
logger.info('VIPs have been moved to another controller')
# Ensure that there is a connectivity to outside world from created VM
self.show_step(5)
for ip in ips:
logger.info('Check connectivity from {0}'.format(ip))
assert_true(self.ping_from_instance(ip,
'8.8.8.8',
vip_contr_new),
'Ping failed')
# Create new network and attach it to default router
self.show_step(6)
net_1 = os_conn.create_network(network_name='net_1')['network']
subnet_1 = os_conn.create_subnet(
subnet_name='subnet_1',
network_id=net_1['id'],
cidr='192.168.77.0/24')
default_router = os_conn.get_router(os_conn.get_network(
self.default.ADMIN_NET))
os_conn.add_router_interface(router_id=default_router['id'],
subnet_id=subnet_1['id'])
# Create vCenter VM with new network and check ICMP connectivity
self.show_step(7)
vms = []
vms.append(os_conn.create_server(
net_id=net_1['id'],
security_groups=[sec_group]))
vms.append(os_conn.create_server(
availability_zone=self.vcenter_az,
image=image,
net_id=net_1['id'],
security_groups=[sec_group]))
ips = []
for vm in vms:
floating = os_conn.assign_floating_ip(vm)
wait(lambda: tcp_ping(floating.ip, 22),
timeout=180,
timeout_msg="Node {ip} is not accessible by SSH.".format(
ip=floating.ip))
ips.append(floating.ip)
for ip in ips:
logger.info('Check connectivity from {0}'.format(ip))
assert_true(self.ping_from_instance(ip,
'8.8.8.8',
vip_contr_new),
'Ping failed')

View File

@ -1,175 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
from proboscis import test
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test.tests.base_test_case import SetupEnvironment
from tests.base_plugin_test import TestNSXtBase
@test(groups=['nsxt_plugin', 'nsxt_integration'])
class TestNSXtIntegration(TestNSXtBase):
"""Tests from test plan that have been marked as 'Automated'."""
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=['nsxt_ceilometer'])
@log_snapshot_after_test
def nsxt_ceilometer(self):
"""Check environment deployment with Fuel NSX-T plugin and Ceilometer.
Scenario:
1. Install NSX-T plugin to Fuel Master node with 5 slaves.
2. Create new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: Ceilometer
3. Add nodes with the following roles:
* Controller + Mongo
* Controller + Mongo
* Controller + Mongo
* Compute-vmware
* Compute
4. Configure interfaces on nodes.
5. Enable plugin and configure network settings.
6. Configure VMware vCenter Settings.
Add 2 vSphere clusters and configure Nova Compute instances on
controllers and compute-vmware.
7. Verify networks.
8. Deploy cluster.
9. Run OSTF.
Duration: 180 min
"""
# Install NSX-T plugin to Fuel Master node with 5 slaves
self.show_step(1)
self.env.revert_snapshot('ready_with_5_slaves')
self.install_nsxt_plugin()
self.show_step(2) # Create new environment with Ceilometer
settings = self.default.cluster_settings
settings['ceilometer'] = True
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=settings)
self.show_step(3) # Add nodes
self.fuel_web.update_nodes(cluster_id,
{'slave-01': ['controller', 'mongo'],
'slave-02': ['controller', 'mongo'],
'slave-03': ['controller', 'mongo'],
'slave-04': ['compute-vmware'],
'slave-05': ['compute']})
self.show_step(4) # Configure interfaces on nodes
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(5) # Enable plugin and configure network settings
self.enable_plugin(cluster_id)
# Configure VMware settings. 2 clusters, 2 Nova Compute instances:
# 1 on controllers and 1 on compute-vmware
self.show_step(6)
target_node = self.fuel_web.get_nailgun_node_by_name('slave-04')
self.fuel_web.vcenter_configure(cluster_id,
target_node_2=target_node['hostname'],
multiclusters=True)
self.show_step(7) # Verify networks
self.fuel_web.verify_network(cluster_id)
self.show_step(8) # Deploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(9) # Run OSTF
self.fuel_web.run_ostf(cluster_id, timeout=3600,
test_sets=['smoke', 'sanity', 'ha',
'tests_platform'])
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=['nsxt_ceph'])
@log_snapshot_after_test
def nsxt_ceph(self):
"""Check environment deployment with Fuel NSX-T plugin and Ceph.
Scenario:
1. Install NSX-T plugin to Fuel Master node with 5 slaves.
2. Create new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: Ceph
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Ceph-OSD
* Ceph-OSD
* Ceph-OSD
* Compute
4. Configure interfaces on nodes.
5. Enable plugin and configure network settings.
6. Configure VMware vCenter Settings. Add 1 vSphere cluster and
configure Nova Compute instance on controller.
7. Verify networks.
8. Deploy cluster.
9. Run OSTF.
Duration: 180 min
"""
# Install NSX-T plugin to Fuel Master node with 5 slaves
self.show_step(1)
self.env.revert_snapshot('ready_with_5_slaves')
self.install_nsxt_plugin()
self.show_step(2) # Create new environment with Ceph
settings = self.default.cluster_settings
settings['volumes_lvm'] = False
settings['volumes_ceph'] = True
settings['images_ceph'] = True
settings['ephemeral_ceph'] = True
settings['objects_ceph'] = True
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=settings)
self.show_step(3) # Add nodes
self.fuel_web.update_nodes(cluster_id,
{'slave-01': ['controller'],
'slave-02': ['ceph-osd'],
'slave-03': ['ceph-osd'],
'slave-04': ['ceph-osd'],
'slave-05': ['compute']})
self.show_step(4) # Configure interfaces on nodes
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(5) # Enable plugin and configure network settings
self.enable_plugin(cluster_id)
# Configure VMware settings. 1 cluster, 1 Compute instance: controller
self.show_step(6)
self.fuel_web.vcenter_configure(cluster_id)
self.show_step(7) # Verify networks
self.fuel_web.verify_network(cluster_id)
self.show_step(8) # Deploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(9) # Run OSTF
self.fuel_web.run_ostf(cluster_id)

View File

@ -1,209 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
from proboscis import test
from proboscis.asserts import assert_true
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test.tests.base_test_case import SetupEnvironment
from tests.base_plugin_test import TestNSXtBase
@test(groups=["nsxt_plugin", "nsxt_smoke_scenarios"])
class TestNSXtSmoke(TestNSXtBase):
"""Tests from test plan that have been marked as 'Automated'."""
@test(depends_on=[SetupEnvironment.prepare_slaves_3],
groups=["nsxt_install"])
@log_snapshot_after_test
def nsxt_install(self):
"""Check that plugin can be installed.
Scenario:
1. Connect to the Fuel master node via ssh.
2. Upload NSX-T plugin.
3. Install NSX-T plugin.
4. Run command 'fuel plugins'.
5. Check name, version and package version of plugin.
Duration 30 min
"""
self.env.revert_snapshot('ready_with_3_slaves')
self.show_step(1)
self.show_step(2)
self.show_step(3)
self.install_nsxt_plugin()
self.show_step(4)
output = self.ssh_manager.execute_on_remote(
ip=self.ssh_manager.admin_ip, cmd='fuel plugins list'
)['stdout'].pop().split(' ')
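# Note (added): .pop() takes the last line of 'fuel plugins list' output --
# assumed here to be the row of the just-installed plugin -- and splits it
# into tokens so the name and version checks below are membership tests.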
self.show_step(5)
msg = "Plugin '{0}' is not installed.".format(self.default.PLUGIN_NAME)
# check name
assert_true(self.default.PLUGIN_NAME in output, msg)
# check version
assert_true(self.default.NSXT_PLUGIN_VERSION in output, msg)
self.env.make_snapshot("nsxt_install", is_make=True)
@test(depends_on=[nsxt_install],
groups=["nsxt_uninstall"])
@log_snapshot_after_test
def nsxt_uninstall(self):
"""Check that NSX-T plugin can be removed.
Scenario:
1. Revert to snapshot nsxt_install
2. Remove NSX-T plugin
3. Verify that plugin is removed.
Duration: 5 min
"""
self.show_step(1)
self.env.revert_snapshot("nsxt_install")
self.show_step(2)
self.delete_nsxt_plugin()
self.show_step(3)
plugin_name = self.default.PLUGIN_NAME
output = self.ssh_manager.execute_on_remote(
ip=self.ssh_manager.admin_ip,
cmd='fuel plugins list')['stdout'].pop().split(' ')
assert_true(plugin_name not in output,
"Plugin '{0}' is not removed".format(plugin_name))
@test(depends_on=[nsxt_install],
groups=["nsxt_smoke"])
@log_snapshot_after_test
def nsxt_smoke(self):
"""Deploy cluster with NSXt Plugin and compute node.
Scenario:
1. Upload the plugin to master node.
2. Create cluster.
3. Add nodes with the following roles:
* controller
* compute
4. Configure NSX-t for that cluster.
5. Deploy cluster with plugin.
6. Run OSTF.
Duration 90 min
"""
self.show_step(1)
self.env.revert_snapshot('nsxt_install')
self.show_step(2)
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=self.default.cluster_settings,
configure_ssl=False)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{'slave-01': ['controller'],
'slave-02': ['compute']}
)
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(4)
self.enable_plugin(cluster_id)
self.show_step(5)
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(6)
self.fuel_web.run_ostf(cluster_id=cluster_id,
test_sets=['smoke', 'sanity'])
@test(groups=["nsxt_plugin", "nsxt_bvt_scenarios"])
class TestNSXtBVT(TestNSXtBase):
"""NSX-t BVT scenarios"""
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=["nsxt_bvt"])
@log_snapshot_after_test
def nsxt_bvt(self):
"""Deploy ha cluster with plugin and KVM + vCenter.
Scenario:
1. Upload plugins to the master node.
2. Create cluster with vcenter.
3. Add nodes with the following roles:
* controller
* controller
* controller
* compute-vmware + cinder-vmware
* compute + cinder
4. Configure vcenter.
5. Configure NSX-T for that cluster.
6. Deploy cluster.
7. Run OSTF.
Duration 3 hours
"""
self.env.revert_snapshot("ready_with_5_slaves")
self.show_step(1)
self.install_nsxt_plugin()
self.show_step(2)
settings = self.default.cluster_settings
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=settings,
configure_ssl=False)
self.show_step(3)
self.fuel_web.update_nodes(
cluster_id,
{'slave-01': ['controller'],
'slave-02': ['controller'],
'slave-03': ['controller'],
'slave-04': ['compute-vmware', 'cinder-vmware'],
'slave-05': ['compute', 'cinder']}
)
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(4)
target_node_2 = \
self.fuel_web.get_nailgun_node_by_name('slave-04')['hostname']
self.fuel_web.vcenter_configure(cluster_id,
multiclusters=True,
target_node_2=target_node_2)
self.show_step(5)
self.enable_plugin(cluster_id)
self.show_step(6)
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(7)
self.fuel_web.run_ostf(
cluster_id=cluster_id, test_sets=['smoke', 'sanity', 'ha'])

View File

@ -1,375 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
import itertools
from proboscis import test
from proboscis.asserts import assert_true
from fuelweb_test.helpers import os_actions
from fuelweb_test.helpers.decorators import log_snapshot_after_test
from fuelweb_test.settings import DEPLOYMENT_MODE
from fuelweb_test.settings import SERVTEST_PASSWORD
from fuelweb_test.settings import SERVTEST_TENANT
from fuelweb_test.settings import SERVTEST_USERNAME
from fuelweb_test.tests.base_test_case import SetupEnvironment
from helpers import openstack as os_help
from tests.base_plugin_test import TestNSXtBase
@test(groups=['nsxt_plugin', 'nsxt_scale'])
class TestNSXtScale(TestNSXtBase):
"""Tests from test plan that have been marked as 'Automated'."""
@test(depends_on=[SetupEnvironment.prepare_slaves_9],
groups=['nsxt_add_delete_controller'])
@log_snapshot_after_test
def nsxt_add_delete_controller(self):
"""Check functionality when controller has been removed or added.
Scenario:
1. Install NSX-T plugin to Fuel Master node with 9 slaves.
2. Create new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
3. Add nodes with the following roles:
* Controller
* Compute
4. Configure interfaces on nodes.
5. Enable plugin and configure network settings.
6. Configure VMware vCenter Settings. Add vSphere cluster and
configure Nova Compute instance on controllers.
7. Deploy cluster.
8. Run OSTF.
9. Launch 1 vcenter instance and 1 KVM instance.
10. Add 2 controller nodes.
11. Redeploy cluster.
12. Check that all instances are in place.
13. Run OSTF.
14. Remove 2 controller nodes.
15. Redeploy cluster.
16. Check that all instances are in place.
17. Run OSTF.
Duration: 180 min
"""
# Install NSX-T plugin to Fuel Master node with 9 slaves
self.show_step(1)
self.env.revert_snapshot('ready_with_9_slaves')
self.install_nsxt_plugin()
self.show_step(2) # Create new environment
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=self.default.cluster_settings,
configure_ssl=False)
self.show_step(3) # Add nodes
self.fuel_web.update_nodes(cluster_id,
{'slave-01': ['controller'],
'slave-04': ['compute']})
self.show_step(4) # Configure interfaces on nodes
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(5) # Enable plugin and configure network settings
self.enable_plugin(cluster_id)
# Configure VMware settings. 1 cluster, 1 Nova Compute on controllers
self.show_step(6)
self.fuel_web.vcenter_configure(cluster_id)
self.show_step(7) # Deploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(8) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
# Launch 1 vcenter instance and 1 KVM instance
self.show_step(9)
os_conn = os_actions.OpenStackActions(
self.fuel_web.get_public_vip(cluster_id),
SERVTEST_USERNAME,
SERVTEST_PASSWORD,
SERVTEST_TENANT)
os_help.create_instance(os_conn)
os_help.create_instance(os_conn, az='vcenter')
self.show_step(10) # Add 2 controller nodes
self.fuel_web.update_nodes(cluster_id, {'slave-02': ['controller'],
'slave-03': ['controller']})
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(11) # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(12) # Check that all instances are in place
os_help.check_instances_state(os_conn)
self.show_step(13) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
self.show_step(14) # Remove 2 controller nodes
self.fuel_web.update_nodes(cluster_id,
{'slave-01': ['controller'],
'slave-02': ['controller']},
False, True)
self.show_step(15) # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(16) # Check that all instances are in place
os_help.check_instances_state(os_conn)
self.show_step(17) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=['nsxt_add_delete_compute_node'])
@log_snapshot_after_test
def nsxt_add_delete_compute_node(self):
"""Verify functionality when compute node has been removed or added.
Scenario:
1. Install NSX-T plugin to Fuel Master node with 5 slaves.
2. Create new environment with the following parameters:
* Compute: KVM/QEMU
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Controller
* Controller
* Compute
4. Configure interfaces on nodes.
5. Enable plugin and configure network settings.
6. Deploy cluster.
7. Run OSTF.
8. Launch KVM vm.
9. Add node with compute role.
10. Redeploy cluster.
11. Check that instance is in place.
12. Run OSTF.
13. Remove node with compute role.
14. Redeploy cluster.
15. Check that instance is in place.
16. Run OSTF.
Duration: 180 min
"""
# Install NSX-T plugin to Fuel Master node with 5 slaves
self.show_step(1)
self.env.revert_snapshot('ready_with_5_slaves')
self.install_nsxt_plugin()
self.show_step(2) # Create new environment
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=self.default.cluster_settings,
configure_ssl=False)
self.show_step(3) # Add nodes
self.fuel_web.update_nodes(cluster_id,
{'slave-01': ['controller'],
'slave-02': ['controller'],
'slave-03': ['controller'],
'slave-04': ['compute']})
self.show_step(4) # Configure interfaces on nodes
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(5) # Enable plugin and configure network settings
self.enable_plugin(cluster_id)
self.show_step(6) # Deploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(7) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
self.show_step(8) # Launch KVM vm
os_conn = os_actions.OpenStackActions(
self.fuel_web.get_public_vip(cluster_id),
SERVTEST_USERNAME,
SERVTEST_PASSWORD,
SERVTEST_TENANT)
os_help.create_instance(os_conn)
self.show_step(9) # Add node with compute role
self.fuel_web.update_nodes(cluster_id, {'slave-05': ['compute']})
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(10) # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(11) # Check that instance is in place
os_help.check_instances_state(os_conn)
self.show_step(12) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
self.show_step(13) # Remove node with compute role
self.fuel_web.update_nodes(cluster_id,
{'slave-04': ['compute']},
False, True)
self.show_step(14) # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(15) # Check that instance is in place
os_help.check_instances_state(os_conn)
self.show_step(16) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
@test(depends_on=[SetupEnvironment.prepare_slaves_5],
groups=['nsxt_add_delete_compute_vmware_node'])
@log_snapshot_after_test
def nsxt_add_delete_compute_vmware_node(self):
"""Verify functionality when compute-vmware has been removed or added.
Scenario:
1. Install NSX-T plugin to Fuel Master node with 5 slaves.
2. Create new environment with the following parameters:
* Compute: KVM/QEMU with vCenter
* Networking: Neutron with NSX-T plugin
* Storage: default
* Additional services: default
3. Add nodes with the following roles:
* Controller
* Controller
* Controller
* Compute-vmware
4. Configure interfaces on nodes.
5. Enable plugin and configure network settings.
6. Configure VMware vCenter Settings. Add 1 vSphere cluster and
configure Nova Compute instance on compute-vmware.
7. Deploy cluster.
8. Run OSTF.
9. Launch vcenter vm.
10. Add node with compute-vmware role.
11. Reconfigure vcenter compute clusters.
12. Redeploy cluster.
13. Check that instance is in place.
14. Run OSTF.
15. Remove node with compute-vmware role.
16. Reconfigure vcenter compute clusters.
17. Redeploy cluster.
18. Run OSTF.
Duration: 240 min
"""
# Install NSX-T plugin to Fuel Master node with 5 slaves
self.show_step(1)
self.env.revert_snapshot('ready_with_5_slaves')
self.install_nsxt_plugin()
self.show_step(2) # Create new environment
cluster_id = self.fuel_web.create_cluster(
name=self.__class__.__name__,
mode=DEPLOYMENT_MODE,
settings=self.default.cluster_settings,
configure_ssl=False)
self.show_step(3) # Add nodes
self.fuel_web.update_nodes(cluster_id,
{'slave-01': ['controller'],
'slave-02': ['controller'],
'slave-03': ['controller'],
'slave-04': ['compute-vmware']})
self.show_step(4) # Configure interfaces on nodes
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(5) # Enable plugin and configure network settings
self.enable_plugin(cluster_id)
# Configure VMware settings. 1 cluster, 1 Nova Compute: compute-vmware
self.show_step(6)
target_node1 = self.fuel_web.get_nailgun_node_by_name('slave-04')
self.fuel_web.vcenter_configure(cluster_id,
target_node_1=target_node1['hostname'])
self.show_step(7) # Deploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(8) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
self.show_step(9) # Launch vcenter vm
os_conn = os_actions.OpenStackActions(
self.fuel_web.get_public_vip(cluster_id),
SERVTEST_USERNAME,
SERVTEST_PASSWORD,
SERVTEST_TENANT)
os_help.create_instance(os_conn, az='vcenter')
self.show_step(10) # Add node with compute-vmware role
self.fuel_web.update_nodes(cluster_id,
{'slave-05': ['compute-vmware']})
self.reconfigure_cluster_interfaces(cluster_id)
self.show_step(11) # Reconfigure vcenter compute clusters
target_node2 = self.fuel_web.get_nailgun_node_by_name('slave-05')
self.fuel_web.vcenter_configure(cluster_id,
target_node_1=target_node1['hostname'],
target_node_2=target_node2['hostname'],
multiclusters=True)
self.show_step(12) # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(13) # Check that instance is in place
os_help.check_instances_state(os_conn)
self.show_step(14) # Run OSTF
self.fuel_web.run_ostf(cluster_id)
self.show_step(15) # Remove node with compute-vmware role
self.fuel_web.update_nodes(cluster_id,
{'slave-04': ['compute-vmware']},
False, True)
self.show_step(16) # Reconfigure vcenter compute clusters
vmware_attr = \
self.fuel_web.client.get_cluster_vmware_attributes(cluster_id)
vcenter_data = vmware_attr['editable']['value']['availability_zones'][
0]["nova_computes"]
comp_vmware_nodes = self.fuel_web.get_nailgun_cluster_nodes_by_roles(
cluster_id, ['compute-vmware'])
comp_vmware_nodes = [node for node in comp_vmware_nodes if
node['pending_deletion'] is True]
for node, nova_comp in itertools.product(comp_vmware_nodes,
vcenter_data):
if node['hostname'] == nova_comp['target_node']['current']['id']:
vcenter_data.remove(nova_comp)
self.fuel_web.client.update_cluster_vmware_attributes(cluster_id,
vmware_attr)
self.show_step(17) # Redeploy cluster
self.fuel_web.deploy_cluster_wait(cluster_id)
self.show_step(18) # Run OSTF
self.fuel_web.run_ostf(cluster_id)

File diff suppressed because it is too large

View File

@ -1,649 +0,0 @@
#!/bin/bash
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
INVALIDOPTS_ERR=100
NOJOBNAME_ERR=101
NOISOPATH_ERR=102
NOTASKNAME_ERR=103
NOWORKSPACE_ERR=104
NOISOFOUND_ERR=107
CDWORKSPACE_ERR=110
ISODOWNLOAD_ERR=111
INVALIDTASK_ERR=112
# Defaults
export REBOOT_TIMEOUT=${REBOOT_TIMEOUT:-5000}
export ALWAYS_CREATE_DIAGNOSTIC_SNAPSHOT=${ALWAYS_CREATE_DIAGNOSTIC_SNAPSHOT:-true}
ShowHelp() {
cat << EOF
System Tests Script
It can perform several actions depending on the Jenkins JOB_NAME it's run from
or it can take names from exported environment variables or command line options
if you do need to override them.
-w (dir) - Path to workspace where fuelweb git repository was checked out.
Uses Jenkins' WORKSPACE if not set
-e (name) - Directly specify environment name used in tests
Uses ENV_NAME variable if set.
-j (name) - Name of this job. Determines ISO name and task name; also used by tests.
Uses Jenkins' JOB_NAME if not set
-v - Do not use virtual environment
-V (dir) - Path to python virtual environment
-i (file) - Full path to ISO file to build or use for tests.
Made from iso dir and name if not set.
-t (name) - Name of the task this script should perform. Should be one of the defined ones.
Taken from Jenkins' job's suffix if not set.
-o (str) - Allows you to pass any extra command line option to the test job if you
want to use some parameters.
-a (str) - Allows you to pass NOSE_ATTR to the test job if you want
to use some parameters.
-A (str) - Allows you to pass NOSE_EVAL_ATTR if you want to enter attributes
as python expressions.
-m (name) - Use this mirror to build ISO from.
Uses 'srt' if not set.
-U - ISO URL for tests.
Null by default.
-r (yes/no) - Should built ISO file be placed with build number tag and
symlinked to the last build or just copied over the last file.
-b (num) - Allows you to override Jenkins' build number if you need to.
-l (dir) - Path to logs directory. Can be set by LOGS_DIR environment variable.
Uses WORKSPACE/logs if not set.
-d - Dry run mode. Only show what would be done and do nothing.
Useful for debugging.
-k - Keep previously created test environment before tests run
-K - Keep test environment after tests are finished
-h - Show this help page
Most variables use guesses from Jenkins' job name but can be overridden
by an exported variable before the script is run or by one of the command line options.
You can override following variables using export VARNAME="value" before running this script
WORKSPACE - path to directory where Fuelweb repository was checked out by Jenkins or manually
JOB_NAME - name of Jenkins job that determines which task should be done and ISO file name.
If task name is "iso" it will make iso file
Other defined names will run Nose tests using previously built ISO file.
ISO file name is taken from job name prefix
Task name is taken from job name suffix
Separator is one dot '.'
For example if JOB_NAME is:
mytest.somestring.iso
ISO name: mytest.iso
Task name: iso
If run with such a JOB_NAME, an iso file with name mytest.iso will be created
If JOB_NAME is:
mytest.somestring.node
ISO name: mytest.iso
Task name: node
If script was run with this JOB_NAME node tests will be using ISO file mytest.iso.
First you should run the mytest.somestring.iso job to create mytest.iso.
Then you can run the mytest.somestring.node job to start tests using mytest.iso and other tests too.
EOF
}
GlobalVariables() {
# where built ISOs should be placed
# use hardcoded default if not set before by export
ISO_DIR="${ISO_DIR:=/var/www/fuelweb-iso}"
# name of iso file
# taken from jenkins job prefix
# if not set before by variable export
if [ -z "${ISO_NAME}" ]; then
ISO_NAME="${JOB_NAME%.*}.iso"
fi
# full path where iso file should be placed
# make from iso name and path to iso shared directory
# if not overridden by options or export
if [ -z "${ISO_PATH}" ]; then
ISO_PATH="${ISO_DIR}/${ISO_NAME}"
fi
# what task should be run
# it's taken from jenkins job name suffix if not set by options
if [ -z "${TASK_NAME}" ]; then
TASK_NAME="${JOB_NAME##*.}"
fi
# do we want to keep ISOs for each build or just copy over a single file
ROTATE_ISO="${ROTATE_ISO:=yes}"
# choose mirror to build iso from. Default is 'srt' for Saratov's mirror
# you can change mirror by exporting USE_MIRROR variable before running this script
USE_MIRROR="${USE_MIRROR:=srt}"
# only show what commands would be executed but do nothing
# this feature is useful if you want to debug this script's behaviour
DRY_RUN="${DRY_RUN:=no}"
VENV="${VENV:=yes}"
}
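# Illustrative example (added comment; job name is made up): with
# JOB_NAME="mytest.node" and no overrides, the defaults above give
#   ISO_NAME="mytest.iso"    TASK_NAME="node"
#   ISO_PATH="/var/www/fuelweb-iso/mytest.iso"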
GetoptsVariables() {
while getopts ":w:j:i:t:o:a:A:m:U:r:b:V:l:dkKe:v:h" opt; do
case $opt in
w)
WORKSPACE="${OPTARG}"
;;
j)
JOB_NAME="${OPTARG}"
;;
i)
ISO_PATH="${OPTARG}"
;;
t)
TASK_NAME="${OPTARG}"
;;
o)
TEST_OPTIONS="${TEST_OPTIONS} ${OPTARG}"
;;
a)
NOSE_ATTR="${OPTARG}"
;;
A)
NOSE_EVAL_ATTR="${OPTARG}"
;;
m)
USE_MIRROR="${OPTARG}"
;;
U)
ISO_URL="${OPTARG}"
;;
r)
ROTATE_ISO="${OPTARG}"
;;
V)
VENV_PATH="${OPTARG}"
;;
l)
LOGS_DIR="${OPTARG}"
;;
k)
KEEP_BEFORE="yes"
;;
K)
KEEP_AFTER="yes"
;;
e)
ENV_NAME="${OPTARG}"
;;
d)
DRY_RUN="yes"
;;
v)
VENV="no"
;;
h)
ShowHelp
exit 0
;;
\?)
echo "Invalid option: -$OPTARG"
ShowHelp
exit $INVALIDOPTS_ERR
;;
:)
echo "Option -$OPTARG requires an argument."
ShowHelp
exit $INVALIDOPTS_ERR
;;
esac
done
}
CheckVariables() {
if [ -z "${JOB_NAME}" ]; then
echo "Error! JOB_NAME is not set!"
exit $NOJOBNAME_ERR
fi
if [ -z "${ISO_PATH}" ]; then
echo "Error! ISO_PATH is not set!"
exit $NOISOPATH_ERR
fi
if [ -z "${TASK_NAME}" ]; then
echo "Error! TASK_NAME is not set!"
exit $NOTASKNAME_ERR
fi
if [ -z "${WORKSPACE}" ]; then
echo "Error! WORKSPACE is not set!"
exit $NOWORKSPACE_ERR
fi
if [ -z "${POOL_PUBLIC}" ]; then
export POOL_PUBLIC='172.16.0.0/24:24'
fi
if [ -z "${POOL_MANAGEMENT}" ]; then
export POOL_MANAGEMENT='172.16.1.0/24:24'
fi
if [ -z "${POOL_PRIVATE}" ]; then
export POOL_PRIVATE='192.168.0.0/24:24'
fi
# vCenter variables
if [ -z "${DISABLE_SSL}" ]; then
export DISABLE_SSL="true"
fi
if [ -z "${VCENTER_USE}" ]; then
export VCENTER_USE="true"
fi
if [ -z "${VCENTER_IP}" ]; then
export VCENTER_IP="172.16.0.254"
fi
if [ -z "${VCENTER_USERNAME}" ]; then
export VCENTER_USERNAME="administrator@vsphere.local"
fi
if [ -z "${VCENTER_PASSWORD}" ]; then
echo "Error! VCENTER_PASSWORD is not set!"
exit 1
fi
if [ -z "${VC_DATACENTER}" ]; then
export VC_DATACENTER="Datacenter"
fi
if [ -z "${VC_DATASTORE}" ]; then
export VC_DATASTORE="nfs"
fi
if [ -z "${VCENTER_IMAGE_DIR}" ]; then
export VCENTER_IMAGE_DIR="/openstack_glance"
fi
if [ -z "${WORKSTATION_NODES}" ]; then
export WORKSTATION_NODES="esxi1 esxi2 esxi3 vcenter trusty nsx-edge"
fi
if [ -z "${WORKSTATION_IFS}" ]; then
export WORKSTATION_IFS="vmnet1 vmnet2 vmnet5"
fi
if [ -z "${VCENTER_CLUSTERS}" ]; then
export VCENTER_CLUSTERS="Cluster1,Cluster2"
fi
if [ -z "${WORKSTATION_SNAPSHOT}" ]; then
echo "Error! WORKSTATION_SNAPSHOT is not set!"
exit 1
fi
if [ -z "${WORKSTATION_USERNAME}" ]; then
echo "Error! WORKSTATION_USERNAME is not set!"
exit 1
fi
if [ -z "${WORKSTATION_PASSWORD}" ]; then
echo "Error! WORKSTATION_PASSWORD is not set!"
exit 1
fi
# NSXt variables
if [ -z "${NSXT_PLUGIN_PATH}" ]; then
echo "Error! NSXT_PLUGIN_PATH is not set!"
exit 1
fi
if [ -z "${NEUTRON_SEGMENT_TYPE}" ]; then
export NEUTRON_SEGMENT_TYPE="tun"
fi
if [ -z "${NSXT_INSECURE}" ]; then
export NSXT_INSECURE='true'
fi
if [ -z "${NSXT_MANAGERS_IP}" ]; then
export NSXT_MANAGERS_IP="172.16.0.249"
fi
if [ -z "${NSXT_USER}" ]; then
export NSXT_USER='admin'
fi
if [ -z "${NSXT_PASSWORD}" ]; then
echo "Error! NSXT_PASSWORD is not set!"
exit 1
fi
if [ -z "${NSXT_OVERLAY_TZ_UUID}" ]; then
export NSXT_OVERLAY_TZ_UUID='0eeb1b85-c826-403d-8762-6a9c23a4f132'
fi
if [ -z "${NSXT_VLAN_TZ_UUID}" ]; then
export NSXT_VLAN_TZ_UUID='8efe20d2-e71a-4d6e-acdd-f78a2ec2e90c'
fi
if [ -z "${NSXT_TIER0_ROUTER_UUID}" ]; then
export NSXT_TIER0_ROUTER_UUID='606acd01-c5f8-40ea-ae20-9a91eb7ebcb4'
fi
if [ -z "${NSXT_EDGE_CLUSTER_UUID}" ]; then
export NSXT_EDGE_CLUSTER_UUID='c53d602a-4010-47cc-a8b1-4ef11d0a3edd'
fi
if [ -z "${NSXT_UPLINK_PROFILE_UUID}" ]; then
export NSXT_UPLINK_PROFILE_UUID='99864272-b34f-46a5-89c8-5657fa7042ea'
fi
if [ -z "${NSXT_CONTROLLER_IP_POOL_UUID}" ]; then
export NSXT_CONTROLLER_IP_POOL_UUID='2e06fcb2-7c5b-4515-a7a9-98809c7b863a'
fi
if [ -z "${NSXT_CONTROLLER_PNICS_PAIRS}" ]; then
export NSXT_CONTROLLER_PNICS_PAIRS='enp0s6:uplink'
fi
if [ -z "${NSXT_COMPUTE_IP_POOL_UUID}" ]; then
export NSXT_COMPUTE_IP_POOL_UUID='2e06fcb2-7c5b-4515-a7a9-98809c7b863a'
fi
if [ -z "${NSXT_COMPUTE_PNICS_PAIRS}" ]; then
export NSXT_COMPUTE_PNICS_PAIRS='enp0s6:uplink'
fi
if [ -z "${NSXT_FLOATING_IP_RANGE}" ]; then
export NSXT_FLOATING_IP_RANGE='172.16.212.2-172.16.212.40'
fi
if [ -z "${NSXT_FLOATING_NET_CIDR}" ]; then
export NSXT_FLOATING_NET_CIDR='172.16.212.0/24'
fi
if [ -z "${NSXT_ROUTING_NET_CIDR}" ]; then
export NSXT_ROUTING_NET_CIDR='172.16.214.0/30'
fi
if [ -z "${NSXT_FLOATING_NET_GW}" ]; then
export NSXT_FLOATING_NET_GW='172.16.212.1'
fi
if [ -z "${NSXT_INTERNAL_NET_CIDR}" ]; then
export NSXT_INTERNAL_NET_CIDR='192.168.251.0/24'
fi
if [ -z "${NSXT_INTERNAL_NET_DNS}" ]; then
export NSXT_INTERNAL_NET_DNS='8.8.8.8'
fi
if [ ! -f "${DEVOPS_SETTINGS_TEMPLATE}" ]; then
if [ -z "${NODE_VOLUME_SIZE}" ]; then
export NODE_VOLUME_SIZE=350
fi
if [ -z "${ADMIN_NODE_MEMORY}" ]; then
export ADMIN_NODE_MEMORY=4096
fi
if [ -z "${ADMIN_NODE_CPU}" ]; then
export ADMIN_NODE_CPU=4
fi
if [ -z "${SLAVE_NODE_MEMORY}" ]; then
export SLAVE_NODE_MEMORY=4096
fi
if [ -z "${SLAVE_NODE_CPU}" ]; then
export SLAVE_NODE_CPU=4
fi
fi
}
CdWorkSpace() {
# chdir into the workspace or fail if we cannot
if [ "${DRY_RUN}" != "yes" ]; then
cd "${WORKSPACE}"
ec=$?
if [ "${ec}" -gt "0" ]; then
echo "Error! Cannot cd to WORKSPACE!"
exit $CDWORKSPACE_ERR
fi
else
echo cd "${WORKSPACE}"
fi
}
RunTest() {
# Run test selected by task name
# check if iso file exists
if [ ! -f "${ISO_PATH}" ]; then
if [ -z "${ISO_URL}" -a "${DRY_RUN}" != "yes" ]; then
echo "Error! File ${ISO_PATH} not found and no ISO_URL (-U key) for downloading!"
exit $NOISOFOUND_ERR
else
if [ "${DRY_RUN}" = "yes" ]; then
echo wget -c ${ISO_URL} -O ${ISO_PATH}
else
echo "No ${ISO_PATH} found. Trying to download file."
wget -c ${ISO_URL} -O ${ISO_PATH}
rc=$?
if [ $rc -ne 0 ]; then
echo "Failed to fetch ISO from ${ISO_URL}"
exit $ISODOWNLOAD_ERR
fi
fi
fi
fi
if [ -z "${VENV_PATH}" ]; then
VENV_PATH="/home/jenkins/venv-nailgun-tests"
fi
# activate the python virtualenv
if [ "${VENV}" = "yes" ]; then
. $VENV_PATH/bin/activate
fi
if [ "${ENV_NAME}" = "" ]; then
ENV_NAME="${JOB_NAME}_system_test"
fi
if [ "${LOGS_DIR}" = "" ]; then
LOGS_DIR="${WORKSPACE}/logs"
fi
if [ ! -f "$LOGS_DIR" ]; then
mkdir -p $LOGS_DIR
fi
export ENV_NAME
export LOGS_DIR
export ISO_PATH
if [ "${KEEP_BEFORE}" != "yes" ]; then
if dos.py list | grep -q "^${ENV_NAME}\$" ; then
dos.py erase "${ENV_NAME}"
fi
fi
# gather additional options for this nose test run
OPTS=""
if [ -n "${NOSE_ATTR}" ]; then
OPTS="${OPTS} -a ${NOSE_ATTR}"
fi
if [ -n "${NOSE_EVAL_ATTR}" ]; then
OPTS="${OPTS} -A ${NOSE_EVAL_ATTR}"
fi
if [ -n "${TEST_OPTIONS}" ]; then
OPTS="${OPTS} ${TEST_OPTIONS}"
fi
clean_old_bridges
export PLUGIN_WORKSPACE="${WORKSPACE/\/fuel-qa}/plugin_test"
export WORKSPACE="${PLUGIN_WORKSPACE}/fuel-qa"
export PYTHONPATH="${PYTHONPATH:+${PYTHONPATH}:}${WORKSPACE}:${PLUGIN_WORKSPACE}"
[[ "${DEBUG}" == "true" ]] && echo "PYTHONPATH:${PYTHONPATH} PATH${PATH}"
[[ "${DEBUG}" == "true" ]] && echo "PLUGIN_WORKSPACE:${PLUGIN_WORKSPACE}"
[[ "${DEBUG}" == "true" ]] && echo "WORKSPACE:${WORKSPACE}"
python $PLUGIN_WORKSPACE/run_tests.py -q --nologcapture --with-xunit ${OPTS} &
SYSTEST_PID=$!
if ! ps -p $SYSTEST_PID > /dev/null
then
echo System tests exited prematurely, aborting
exit 1
fi
while [ "$(virsh net-list | grep -c $ENV_NAME)" -ne 5 ]; do sleep 10
if ! ps -p $SYSTEST_PID > /dev/null
then
echo System tests exited prematurely, aborting
exit 1
fi
done
sleep 10
# Configure vcenter nodes and interfaces
setup_net $ENV_NAME
clean_iptables
setup_stt $ENV_NAME
setup_external_net
revert_ws "$WORKSTATION_NODES" || { echo "killing $SYSTEST_PID and its childs" && pkill --parent $SYSTEST_PID && kill $SYSTEST_PID && exit 1; }
echo waiting for system tests to finish
wait $SYSTEST_PID
export RES=$?
echo ENVIRONMENT NAME is $ENV_NAME
virsh net-dumpxml ${ENV_NAME}_admin | grep -P "(\d+\.){3}" -o | awk '{print "Fuel master node IP: "$0"2"}'
if [ "${KEEP_AFTER}" != "yes" ]; then
# remove environment after tests
if [ "${DRY_RUN}" = "yes" ]; then
echo dos.py destroy "${ENV_NAME}"
else
dos.py destroy "${ENV_NAME}"
fi
fi
exit "${RES}"
}
RouteTasks() {
# this selector defines task names that are recognised by this script
# and runs corresponding jobs for them
# running any jobs should exit this script
case "${TASK_NAME}" in
test)
RunTest
;;
*)
echo "Unknown task: ${TASK_NAME}!"
exit $INVALIDTASK_ERR
;;
esac
exit 0
}
add_interface_to_bridge() {
env=$1
net_name=$2
nic=$3
ip=$4
for net in $(virsh net-list |grep ${env}_${net_name} |awk '{print $1}');do
bridge=$(virsh net-info $net |grep -i bridge |awk '{print $2}')
setup_bridge $bridge $nic $ip && echo $net_name bridge $bridge ready
done
}
setup_bridge() {
bridge=$1
nic=$2
ip=$3
sudo /sbin/brctl stp $bridge off
sudo /sbin/brctl addif $bridge $nic
# remove the ip from any interface that already has it
for itf in $(sudo ip -o addr show to $ip | cut -d' ' -f2); do
echo deleting $ip from $itf
sudo ip addr del dev $itf $ip
done
echo adding $ip to $bridge
sudo /sbin/ip addr add $ip dev $bridge
echo $nic added to $bridge
sudo /sbin/ip link set dev $bridge up
if sudo /sbin/iptables-save |grep $bridge | grep -i reject| grep -q FORWARD; then
sudo /sbin/iptables -D FORWARD -o $bridge -j REJECT --reject-with icmp-port-unreachable
sudo /sbin/iptables -D FORWARD -i $bridge -j REJECT --reject-with icmp-port-unreachable
fi
}
clean_old_bridges() {
for intf in $WORKSTATION_IFS; do
for br in $(/sbin/brctl show | grep -v "bridge name" | cut -f1 -d' '); do
/sbin/brctl show $br| grep -q $intf && sudo /sbin/brctl delif $br $intf \
&& sudo /sbin/ip link set dev $br down && echo $intf deleted from $br
done
done
}
clean_iptables() {
sudo /sbin/iptables -F
sudo /sbin/iptables -t nat -F
sudo /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
}
revert_ws() {
cmd="vmrun -T ws-shared -h https://localhost:443/sdk -u $WORKSTATION_USERNAME -p $WORKSTATION_PASSWORD"
for i in $1; do
$cmd listRegisteredVM | grep -q $i || { echo "VM $i does not exist"; continue; }
echo vmrun: reverting $i to $WORKSTATION_SNAPSHOT
$cmd revertToSnapshot "[standard] $i/$i.vmx" $WORKSTATION_SNAPSHOT || { echo "Error: revert of $i failed"; return 1; }
done
for i in $1; do
echo vmrun: starting $i
$cmd start "[standard] $i/$i.vmx" || { echo "Error: $i failed to start"; return 1; }
done
}
setup_net() {
env=$1
add_interface_to_bridge $env public vmnet1 172.16.0.1/24
}
setup_stt() {
set -e
env=$1
net_name='private'
nic='vmnet2'
for net in $(virsh net-list |grep ${env}_${net_name} | awk '{print $1}');do
bridge=$(virsh net-info $net | grep -i bridge | awk '{print $2}')
done
sudo /sbin/brctl stp $bridge off
sudo /sbin/brctl addif $bridge $nic
echo $nic added to $bridge
sudo /sbin/ip link set dev $bridge up
if sudo /sbin/iptables-save | grep $bridge | grep -i reject| grep -q FORWARD; then
sudo /sbin/iptables -D FORWARD -o $bridge -j REJECT --reject-with icmp-port-unreachable
sudo /sbin/iptables -D FORWARD -i $bridge -j REJECT --reject-with icmp-port-unreachable
fi
echo "Stt added to $net_name bridge $bridge"
}
setup_external_net() {
nic='vmnet5'
ip=${NSXT_ROUTING_NET_CIDR%\.*}.1
gw_ip=${NSXT_ROUTING_NET_CIDR%\.*}.2
mask=${NSXT_ROUTING_NET_CIDR##*\/}
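# Illustrative expansion (added comment; uses the defaults from
# CheckVariables): with NSXT_ROUTING_NET_CIDR='172.16.214.0/30' the three
# lines above give ip=172.16.214.1, gw_ip=172.16.214.2 and mask=30.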
# remove the ip from any interface that already has it
for itf in $(sudo ip -o addr show to $ip | cut -d' ' -f2); do
echo deleting $ip from $itf
sudo ip addr del $ip/$mask dev $itf
done
for itf in $(sudo ip -o ro show to ${NSXT_FLOATING_NET_CIDR} | cut -d' ' -f5); do
echo deleting route to ${NSXT_FLOATING_NET_CIDR} dev $itf
sudo ip ro del ${NSXT_FLOATING_NET_CIDR} dev $itf
done
set -e
sudo /sbin/ip addr add ${ip}/${mask} dev $nic
sudo /sbin/ip ro add ${NSXT_FLOATING_NET_CIDR} via ${gw_ip}
echo "Routing net added to $nic"
}
# MAIN
# first, parse variables from the command-line options
GetoptsVariables "${@}"
# then define global variables and their defaults where needed
GlobalVariables
# check that all critical variables are set
CheckVariables
# chdir into the working directory unless this is a dry run
CdWorkSpace
# finally, choose what to do according to TASK_NAME
RouteTasks
RouteTasks


@@ -1,29 +0,0 @@
#!/bin/bash
# Add here any actions that are required before the plugin build,
# such as building packages or downloading them from mirrors.
# The script should return 0 if there were no errors.
set -eux
ROOT="$(dirname $(readlink -f $0))"
PLUGIN_MOD_DIR="$ROOT/deployment_scripts/puppet/modules/upstream"
MODULE_NAME='nsxt'
# Download upstream puppet modules that are not in fuel-library/
find "$ROOT/deployment_scripts/puppet/modules" -maxdepth 1 -mindepth 1 -type d ! -name $MODULE_NAME -prune -exec rm -fr {} \;
"$ROOT"/update_modules.sh -d "$PLUGIN_MOD_DIR"
# Remove .git directory
rm -fr $(find "${PLUGIN_MOD_DIR:?}" -name '.git' )
mv "$PLUGIN_MOD_DIR"/* "$(dirname $PLUGIN_MOD_DIR)"
# Download puppet modules that are in fuel-library/
TARBALL_VERSION='stable/mitaka'
REPO_PATH="https://github.com/openstack/fuel-library/tarball/${TARBALL_VERSION}"
#
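# fetch the fuel-library tarball and unpack only deployment/puppet/, stripping
# the leading openstack-fuel-library-<sha>/deployment/puppet path components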
wget -qO- "$REPO_PATH" | tar --wildcards -C "$PLUGIN_MOD_DIR" --strip-components=3 -zxvf - "openstack-fuel-library-*/deployment/puppet/"
mv "$PLUGIN_MOD_DIR"/osnailyfacter/lib/puppet/parser/functions/get_ssl_property.rb "$(dirname $PLUGIN_MOD_DIR)"/$MODULE_NAME/lib/puppet/parser/functions
# clean
rm -fr "$PLUGIN_MOD_DIR"


@@ -1,169 +0,0 @@
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
BUILDDIR = _build
# User-friendly check for sphinx-build
ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1)
$(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/)
endif
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
# the i18n builder cannot share the environment and doctrees with the others
I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext
help:
	@echo "Please use \`make <target>' where <target> is one of"
	@echo "  html       to make standalone HTML files"
	@echo "  dirhtml    to make HTML files named index.html in directories"
	@echo "  singlehtml to make a single large HTML file"
	@echo "  pickle     to make pickle files"
	@echo "  json       to make JSON files"
	@echo "  htmlhelp   to make HTML files and a HTML help project"
	@echo "  qthelp     to make HTML files and a qthelp project"
	@echo "  devhelp    to make HTML files and a Devhelp project"
	@echo "  epub       to make an epub"
	@echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
	@echo "  latexpdf   to make LaTeX files and run them through pdflatex"
	@echo "  latexpdfja to make LaTeX files and run them through platex/dvipdfmx"
	@echo "  text       to make text files"
	@echo "  man        to make manual pages"
	@echo "  texinfo    to make Texinfo files"
	@echo "  info       to make Texinfo files and run them through makeinfo"
	@echo "  gettext    to make PO message catalogs"
	@echo "  changes    to make an overview of all changed/added/deprecated items"
	@echo "  xml        to make Docutils-native XML files"
	@echo "  pseudoxml  to make pseudoxml-XML files for display purposes"
	@echo "  linkcheck  to check all external links for integrity"
	@echo "  doctest    to run all doctests embedded in the documentation (if enabled)"

clean:
	rm -rf $(BUILDDIR)/*

html:
	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."

dirhtml:
	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
	@echo
	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."

singlehtml:
	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
	@echo
	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."

pickle:
	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
	@echo
	@echo "Build finished; now you can process the pickle files."

json:
	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
	@echo
	@echo "Build finished; now you can process the JSON files."

htmlhelp:
	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
	@echo
	@echo "Build finished; now you can run HTML Help Workshop with the" \
	      ".hhp project file in $(BUILDDIR)/htmlhelp."

devhelp:
	$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
	@echo
	@echo "Build finished."
	@echo "To view the help file:"
	@echo "# mkdir -p $$HOME/.local/share/devhelp/FuelNSXTplugin"
	@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/FuelNSXTplugin"
	@echo "# devhelp"

epub:
	$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
	@echo
	@echo "Build finished. The epub file is in $(BUILDDIR)/epub."

latex:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo
	@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
	@echo "Run \`make' in that directory to run these through (pdf)latex" \
	      "(use \`make latexpdf' here to do that automatically)."

latexpdf:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through pdflatex..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

latexpdfja:
	$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
	@echo "Running LaTeX files through platex and dvipdfmx..."
	$(MAKE) -C $(BUILDDIR)/latex all-pdf-ja
	@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."

text:
	$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
	@echo
	@echo "Build finished. The text files are in $(BUILDDIR)/text."

man:
	$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
	@echo
	@echo "Build finished. The manual pages are in $(BUILDDIR)/man."

texinfo:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo
	@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
	@echo "Run \`make' in that directory to run these through makeinfo" \
	      "(use \`make info' here to do that automatically)."

info:
	$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
	@echo "Running Texinfo files through makeinfo..."
	make -C $(BUILDDIR)/texinfo info
	@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

gettext:
	$(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale
	@echo
	@echo "Build finished. The message catalogs are in $(BUILDDIR)/locale."

changes:
	$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
	@echo
	@echo "The overview file is in $(BUILDDIR)/changes."

linkcheck:
	$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
	@echo
	@echo "Link check complete; look for any errors in the above output " \
	      "or in $(BUILDDIR)/linkcheck/output.txt."

doctest:
	$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
	@echo "Testing of doctests in the sources finished, look at the " \
	      "results in $(BUILDDIR)/doctest/output.txt."

xml:
	$(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml
	@echo
	@echo "Build finished. The XML files are in $(BUILDDIR)/xml."

pseudoxml:
	$(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml
	@echo
	@echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml."


@@ -1,11 +0,0 @@
Fuel NSX-T plugin specification
===============================
If you want to build the HTML variant of the plugin specification, first
install the necessary Sphinx requirements:
# pip install -r requirements.txt
Then you can build the specification with the `make' tool:
# make html


@@ -1,256 +0,0 @@
"""Copyright 2016 Mirantis, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
"""
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [ 'oslosphinx' ]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Fuel NSX-T plugin'
copyright = u'2016, Mirantis'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0.0'
# The full version, including alpha/beta/rc tags.
release = '1.0.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'FuelNSXTplugindoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
# 'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'FuelNSXTplugin.tex', u'Fuel NSX-T plugin specification',
u'Mirantis Inc.', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'fuelnsxtplugin', u'Fuel NSX-T plugin specification',
[u'Igor Zinovik'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'FuelNSXTplugin', u'Fuel NSX-T plugin Documentation',
u'Igor Zinovik', 'FuelNSXTplugin', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False


@@ -1,11 +0,0 @@
.. Fuel NSX-T plugin documentation master file
Fuel NSX-T plugin's specification
=================================
Contents:
.. toctree::
   :maxdepth: 2

   nsx-t-1.0.0-spec


@@ -1,169 +0,0 @@
..
   This work is licensed under a Creative Commons Attribution 3.0 Unported
   License.

   http://creativecommons.org/licenses/by/3.0/legalcode
========================
Fuel NSX-T plugin v1.0.0
========================
NSX-T plugin for Fuel provides the ability to deploy an OpenStack cluster
that uses the NSX Transformers SDN platform as the backend for the Neutron
server.
Proposed change
===============
Requirements
------------
- the plugin must be compatible with Fuel 9.0
- the NSX Transformers platform is correctly configured and running before
  the plugin starts the deployment process
- the plugin is not hot pluggable, i.e. it cannot be added after the
  OpenStack cluster has been deployed
- overlay (STT) traffic must reside in the Fuel Private network
The NSX Transformers platform ships NSX agents that bring STT support to
OpenStack nodes (controller, compute). It also supports the ESXi hypervisor.
Plugin component
----------------
The plugin reuses the ``network:neutron:core:nsx`` component
(component-registry [1]_) that is also declared by the Fuel NSXv plugin
[2]_. This means the two plugins cannot be installed together on the Fuel
master node; this is a limitation of the Fuel UI [3]_.
Incompatible roles
------------------
The plugin is not compatible with the following roles:
* Ironic
NSX Transformers packages for Linux
-----------------------------------
Linux packages are provided together with the NSX distribution. License
requirements do not allow distributing them inside the plugin package, so
the operator must upload them into the plugin's directory; only after that
can cluster deployment start.
During deployment the packages are copied to each node's local disk, a local
repository with the highest priority is formed, and the packages are pinned,
as sketched below.
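A minimal sketch of how such a pinned local repository could look on an
Ubuntu node (the repository path, file names and pin priority are
illustrative assumptions, not the plugin's actual implementation):

.. code-block:: bash

   # index the operator-supplied NSX-T .deb packages into a local repo
   mkdir -p /opt/nsx-t-repo
   cp /tmp/nsx-t-pkgs/*.deb /opt/nsx-t-repo/   # hypothetical source path
   ( cd /opt/nsx-t-repo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz )

   # register the repository and pin it above all other sources
   echo 'deb [trusted=yes] file:/opt/nsx-t-repo ./' \
       > /etc/apt/sources.list.d/nsx-t-local.list
   cat > /etc/apt/preferences.d/nsx-t-local <<'EOF'
   Package: *
   Pin: origin ""
   Pin-Priority: 1100
   EOF
   apt-get update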
Plugin deployment workflow
--------------------------
#. Install OVS package provided with NSX Transformers distribution
#. Install dependencies for OVS and NSX-T packages
#. Install NSX plugin on controller
#. Add node to NSX fabric (aka management plane)
#. Register node as transport node in NSX-T Manager (aka control plane)
#. Add a permit rule for STT traffic (steps 6-7 are sketched after this list)
#. Stop and disable neutron-openvswitch-agent
#. Configure neutron-server to use the NSX plugin (``nsx_v3``)
#. Configure neutron dhcp agent
#. Configure neutron metadata agent
#. Configure nova to use the NSX-managed OVS bridge
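The permit rule and the agent shutdown (steps 6 and 7) are plain host-level
operations. A rough sketch, assuming STT's commonly documented TCP port 7471
and an upstart-managed node; both assumptions should be verified against the
actual NSX-T and distribution versions:

.. code-block:: bash

   # step 6: permit overlay (STT) traffic between transport nodes
   iptables -I INPUT -p tcp --dport 7471 -m comment --comment "NSX-T STT" -j ACCEPT

   # step 7: the NSX plugin replaces the reference OVS agent entirely
   service neutron-openvswitch-agent stop
   echo manual > /etc/init/neutron-openvswitch-agent.override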
Data model impact
-----------------
The plugin will produce the following settings in astute.yaml:
.. code-block:: yaml

   nsx:
     nsx_api_managers:
       value: 172.16.0.249
     nsx_api_user:
       value: admin
     nsx_api_password:
       value: r00tme
     nsx_default_overlay_tz:
       value: a1ed818c-3580-45ac-a1bc-8fd4bf045cff
     nsx_default_vlan_tz:
       value: 59919e1c-9689-4335-97cd-758d27204287
     nsx_default_tier0_router:
       value: 0785e4bc-10d0-4744-8088-9cb26b38f23f
Upgrade impact
--------------
None.
Other end user impact
---------------------
None.
Implementation
==============
Assignee(s)
-----------
Primary assignee:
- Igor Zinovik <izinovik@mirantis.com> - developer
Other contributors:
- Artem Savinov <asavinov@mirantis.com> - developer
Project manager:
- Andrian Noga <anoga@mirantis.com>
Quality assurance:
- Andrey Setyaev <asetyaev@mirantis.com>
Work items
==========
* Create pre-dev environment and manually deploy NSX Transformers
* Create Fuel plugin bundle, which contains deployment scripts, puppet
modules and metadata
* Implement puppet module
* Create system tests for the plugin
* Prepare user guide
Dependencies
============
* Fuel 9.0
* VMware NSX Transformers 1.0
Testing
=======
* Sanity checks including plugin build
* Syntax check
* Functional testing
* Non-functional testing
* Destructive testing
Documentation impact
====================
* User guide
* Test plan
References
==========
.. [1] Component registry specification https://github.com/openstack/fuel-specs/blob/master/specs/8.0/component-registry.rst
.. [2] Fuel NSXv plugin component https://github.com/openstack/fuel-plugin-nsxv/blob/master/components.yaml
.. [3] Fuel UI component binding https://github.com/openstack/fuel-ui/blob/stable/mitaka/static/views/wizard.js#L348


@@ -1,3 +0,0 @@
docutils==0.9.1
oslosphinx
sphinx>=1.1.2,!=1.2.0,<1.3


@@ -1,164 +0,0 @@
#!/bin/bash -e
###############################################################################
#
# Copyright 2015 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
###############################################################################
#
# update_modules.sh
#
# This script uses librarian-puppet-simple to populate the puppet folder with
# upstream puppet modules. By default, it assumes librarian-puppet-simple is
# already available to the environment or it will fail. You can provide command
# line options to have the script use bundler to install librarian-puppet-simple
# if necessary.
#
# Parameters:
# -b - Use bundler to install librarian-puppet (optional)
# -r - Hard git reset of librarian managed modules back to specified version (optional)
# -p <puppet_version> - Puppet version to use with bundler (optional)
# -h <bundle_dir> - Folder to be used as the home directory for bundler (optional)
# -g <gem_home> - Folder to be used as the gem directory (optional)
# -u - Run librarian update (optional)
# -v - Verbose printing, turns on set -x (optional)
# -d <mod_dir> - Path where modules should be installed (optional)
# -? - This usage information
#
# Variables:
# PUPPET_GEM_VERSION - the version of puppet to be pulled down by bundler
# Defaults to '~>3.8'
# BUNDLE_DIR - The folder to store the bundle gems in.
# Defaults to '/var/tmp/.bundle_home'
# GEM_HOME - The folder to store the gems in to not require root.
# Defaults to '/var/tmp/.gem_home'
#
# NOTE: These variables can be overridden via bash environment variables with
# the same names or via the command line parameters.
#
# Author: Alex Schultz <aschultz@mirantis.com>
#
###############################################################################
set -e
usage() {
cat <<EOF
Usage: $(basename $0) [-b] [-r] [-p <puppet_version>] [-h <bundle_dir>] [-g <gem_home>] [-d <mod_dir>] [-u] [-v] [-?]
Options:
-b - Use bundler instead of assuming librarian-puppet is available
-r - Hard git reset of librarian managed modules back to specified version
-p <puppet_version> - Puppet version to use with bundler
-h <bundle_dir> - Folder to be used as the home directory for bundler
-g <gem_home> - Folder to be used as the gem directory
-u - Run librarian update
-v - Verbose printing of commands
-d <mod_dir> - Path where modules should be installed
-? - This usage information
EOF
exit 1
}
while getopts ":bp:g:h:vru:d:" opt; do
case $opt in
b)
USE_BUNDLER=true
BUNDLER_EXEC="bundle exec"
;;
p)
PUPPET_GEM_VERSION=$OPTARG
;;
h)
BUNDLE_DIR=$OPTARG
;;
g)
GEM_HOME=$OPTARG
;;
r)
RESET_HARD=true
;;
u)
UPDATE=true
;;
v)
VERBOSE='--verbose'
set -x
;;
d)
PLUGIN_MOD_DIR=$OPTARG
;;
\?)
usage
;;
:)
echo "Option -$OPTARG requires an argument." >&2
usage
;;
esac
done
shift "$((OPTIND-1))"
DEPLOYMENT_DIR=$(cd $(dirname $0) && pwd -P)
# Timeout in seconds for running puppet librarian
TIMEOUT=600
export PUPPET_GEM_VERSION=${PUPPET_GEM_VERSION:-'~>3.8'}
export BUNDLE_DIR=${BUNDLE_DIR:-'/var/tmp/.bundle_home'}
export GEM_HOME=${GEM_HOME:-'/var/tmp/.gem_home'}
# We need to be in the deployment directory to run librarian-puppet-simple
cd $DEPLOYMENT_DIR
if [ "$USE_BUNDLER" = true ]; then
# ensure bundler is installed
bundle --version
# update bundler modules
bundle update
fi
# if no timeout command, return true so we don't fail this script (LP#1510665)
TIMEOUT_CMD=$(type -P timeout || true)
if [ -n "$TIMEOUT_CMD" ]; then
TIMEOUT_CMD="$TIMEOUT_CMD $TIMEOUT"
fi
# Check to make sure if the folder already exists, it has a .git so we can
# use git on it. If the mod folder exists, but .git doesn't then remove the mod
# folder so it can be properly installed via librarian.
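# a Puppetfile entry looks like: mod 'modulename', :git => '...';
# stripping punctuation and taking the second field yields the bare module name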
for MOD in $(grep "^mod" Puppetfile | tr -d '[:punct:]' | awk '{ print $2 }'); do
MOD_DIR="${PLUGIN_MOD_DIR}/${MOD}"
if [ -d $MOD_DIR ] && [ ! -d "${MOD_DIR}/.git" ];
then
rm -rf "${MOD_DIR}"
fi
done
# run librarian-puppet install to populate the modules if they do not already
# exist
$TIMEOUT_CMD $BUNDLER_EXEC librarian-puppet install $VERBOSE --path=${PLUGIN_MOD_DIR}
# run librarian-puppet update to ensure the modules are checked out to the
# correct version
if [ "$UPDATE" = true ]; then
$TIMEOUT_CMD $BUNDLER_EXEC librarian-puppet update $VERBOSE --path=${PLUGIN_MOD_DIR}
fi
# do a hard reset on the librarian managed modules LP#1489542
if [ "$RESET_HARD" = true ]; then
for MOD in $(grep "^mod " Puppetfile | tr -d '[:punct:]' | awk '{ print $2 }'); do
cd "${PLUGIN_MOD_DIR}/${MOD}"
git reset --hard
done
cd $DEPLOYMENT_DIR
fi