Refactor code of vmware dvs tests

* Make the code more readable.
* Fix most misprints in the docs.

Change-Id: I56637bd15b9491cd891e1be95f48c4cd61f372ca
commit edbe780e60 (parent 2eb83d06bd)
@@ -64,7 +64,7 @@ Steps
 #####

 1. Install DVS plugin on master node.
-2. Create a new environment with following parameters:
+2. Create a new environment with the following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with VLAN segmentation
    * Storage: default
@@ -81,14 +81,12 @@ Steps
 7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers.
 8. Verify networks.
 9. Deploy cluster.
-10. Run OSTF
+10. Run OSTF.
 11. Launch instances in nova and vcenter availability zones.
-12. Verify connection between instances. Send ping.
-    Check that ping get reply.
+12. Verify connection between instances: check that instances can ping each other.
 13. Shutdown controller with vmclusters.
 14. Check that vcenter-vmcluster migrates to another controller.
-15. Verify connection between instances.
-    Send ping, check that ping get reply.
+15. Verify connection between instances: check that instances can ping each other.

 Expected result
@@ -123,7 +121,7 @@ Steps
 #####

 1. Install DVS plugin on master node.
-2. Create a new environment with following parameters:
+2. Create a new environment with the following parameters:
    * Compute: KVM/QEMU with vCenter
    * Networking: Neutron with VLAN segmentation
    * Storage: default
@@ -141,9 +139,9 @@ Steps
 8. Verify networks.
 9. Deploy cluster.
 10. Run OSTF.
-11. Launch instance VM_1 with image TestVM, availability zone nova and flavor m1.micro.
-12. Launch instance VM_2 with image TestVM-VMDK, availability zone vcenter and flavor m1.micro.
-13. Check connection between instances, send ping from VM_1 to VM_2 and vice verse.
+11. Launch instance VM_1 from image TestVM, with availability zone nova and flavor m1.micro.
+12. Launch instance VM_2 from image TestVM-VMDK, with availability zone vcenter and flavor m1.micro.
+13. Verify connection between instances: check that VM_1 and VM_2 can ping each other.
 14. Reboot vcenter.
 15. Check that controller lost connection with vCenter.
 16. Wait for vCenter.
@@ -202,10 +200,10 @@ Steps
 8. Verify networks.
 9. Deploy cluster.
 10. Run OSTF.
 11. Launch instance VM_1 with image TestVM, nova availability zone and flavor m1.micro.
 12. Launch instance VM_2 with image TestVM-VMDK, vcenter availability zone and flavor m1.micro.
-13. Check connection between instances, send ping from VM_1 to VM_2 and vice verse.
-14. Reboot vcenter.
+13. Verify connection between instances: check that VM_1 and VM_2 can ping each other.
+14. Reboot vCenter.
 15. Check that ComputeVMware lost connection with vCenter.
 16. Wait for vCenter.
 17. Ensure connectivity between instances.
@@ -261,14 +259,12 @@ Steps
 7. Configure VMware vCenter Settings. Add 2 vSphere clusters and configure Nova Compute instances on controllers.
 8. Verify networks.
 9. Deploy cluster.
-10. Run OSTF
+10. Run OSTF.
 11. Launch instances in nova and vcenter availability zones.
-12. Verify connection between instances. Send ping.
-    Check that ping get reply.
+12. Verify connection between instances: check that instances can ping each other.
 13. Reset controller with vmclusters services.
 14. Check that vmclusters services migrate to another controller.
-15. Verify connection between instances.
-    Send ping, check that ping get reply.
+15. Verify connection between instances: check that instances can ping each other.

 Expected result
@@ -105,15 +105,15 @@ Steps
 * Storage: default
 * Additional services: default
 3. Go to Network tab -> Other subtab and check DVS plugin section is displayed with all required GUI elements:
-   'Neutron VMware DVS ML2 plugin' check box
-   "Use the VMware DVS firewall driver" check box
-   "Enter the cluster to dvSwitch mapping." text field with description 'List of ClusterName:SwitchName pairs, separated by semicolon. '
+   'Neutron VMware DVS ML2 plugin' checkbox
+   'Use the VMware DVS firewall driver' checkbox
+   'Enter the cluster to dvSwitch mapping.' text field with description 'List of ClusterName:SwitchName pairs, separated by semicolon.'
    'Versions' radio button with <plugin version>
-4. Verify that check box "Neutron VMware DVS ML2 plugin" is enabled by default.
-5. Verify that user can disable -> enable the DVS plugin by clicking on the checkbox "Neutron VMware DVS ML2 plugin"
-6. Verify that check box "Use the VMware DVS firewall driver" is enabled by default.
+4. Verify that checkbox 'Neutron VMware DVS ML2 plugin' is enabled by default.
+5. Verify that user can disable/enable the DVS plugin by clicking on the checkbox 'Neutron VMware DVS ML2 plugin'.
+6. Verify that checkbox 'Use the VMware DVS firewall driver' is enabled by default.
 7. Verify that all labels of the DVS plugin section have the same font style and color.
-8. Verify that all elements of the DVS plugin section are vertically aligned
+8. Verify that all elements of the DVS plugin section are vertically aligned.

 Expected result
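The mapping field described above holds ClusterName:SwitchName pairs separated by semicolons. A minimal sketch of how such a value could be parsed and validated; the helper name and example values are illustrative, not taken from the plugin code:

```python
def parse_cluster_mapping(value):
    """Parse 'Cluster1:dvSwitch1;Cluster2:dvSwitch2' into a dict."""
    mapping = {}
    for pair in value.split(";"):
        pair = pair.strip()
        if not pair:
            continue  # tolerate a trailing semicolon
        cluster, sep, switch = pair.partition(":")
        if not sep or not cluster.strip() or not switch.strip():
            raise ValueError("malformed pair: %r" % pair)
        mapping[cluster.strip()] = switch.strip()
    return mapping
```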
@@ -184,7 +184,7 @@ dvs_vcenter_bvt
 Description
 ###########

-Check deployment with VMware DVS plugin, 3 Controllers, Compute, 2 CephOSD, CinderVMware and computeVMware roles.
+Check deployment with VMware DVS plugin, 3 Controllers, 3 Compute + CephOSD and CinderVMware + computeVMware roles.

 Complexity
@@ -15,7 +15,7 @@ dvs_vcenter_systest_setup
 Description
 ###########

-Deploy environment in DualHypervisors mode with 3 controllers, 2 compute-vmware and 1 compute nodes. Nova Compute instances are running on controller nodes.
+Deploy environment in DualHypervisors mode with 1 controller, 1 compute-vmware and 2 compute nodes. Nova Compute instances are running on controller nodes.

 Complexity
@@ -86,7 +86,7 @@ Steps
 5. Remove private network net_01.
 6. Check that network net_01 is not present in the vSphere.
 7. Add private network net_01.
-8. Check that networks is present in the vSphere.
+8. Check that network net_01 is present in the vSphere.

 Expected result
@@ -160,14 +160,14 @@ Steps

 1. Set up for system tests.
 2. Log in to Horizon Dashboard.
 3. Navigate to Project -> Compute -> Instances.
 4. Launch instance VM_1 with image TestVM, availability zone nova and flavor m1.micro.
 5. Launch instance VM_2 with image TestVM-VMDK, availability zone vcenter and flavor m1.micro.
-6. Verify that instances communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
+6. Verify that instances communicate between each other: check that VM_1 and VM_2 can ping each other.
 7. Disable interface of VM_1.
-8. Verify that instances don't communicate between each other. Send icmp ping from VM_2 to VM_1 and vice versa.
+8. Verify that instances don't communicate between each other: check that VM_1 and VM_2 can not ping each other.
 9. Enable interface of VM_1.
-10. Verify that instances communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
+10. Verify that instances communicate between each other: check that VM_1 and VM_2 can ping each other.

 Expected result
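The expected outcome of steps 6-10 above follows one rule: a ping succeeds only while both endpoints have their interface up. A small illustrative model of that expectation (the helper is a sketch, not part of the test framework):

```python
def expected_ping_results(interface_up):
    """Map each ordered (src, dst) pair to the expected ping outcome.

    interface_up: dict like {"VM_1": True, "VM_2": False}.
    """
    vms = sorted(interface_up)
    return {(src, dst): interface_up[src] and interface_up[dst]
            for src in vms for dst in vms if src != dst}
```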
@@ -204,13 +204,13 @@ Steps
 1. Set up for system tests.
 2. Log in to Horizon Dashboard.
 3. Add two private networks (net01 and net02).
 4. Add one subnet (net01_subnet01: 192.168.101.0/24, net02_subnet01: 192.168.102.0/24) to each network.
 5. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone.
 6. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone.
 7. Check abilities to assign multiple vNIC net01 and net02 to VM_1.
 8. Check abilities to assign multiple vNIC net01 and net02 to VM_2.
 9. Check that both interfaces on each instance have an IP address. To activate the second interface on cirros, edit /etc/network/interfaces and restart the network: "sudo /etc/init.d/S40network restart"
-10. Send icmp ping from VM_1 to VM_2 and vice versa.
+10. Check that VM_1 and VM_2 can ping each other.

 Expected result
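Step 9 above checks that each vNIC got an address from its subnet. The membership check can be sketched with the standard `ipaddress` module (the subnet names and CIDRs come from step 4; the helper itself is illustrative):

```python
import ipaddress

SUBNETS = {
    "net01_subnet01": ipaddress.ip_network("192.168.101.0/24"),
    "net02_subnet01": ipaddress.ip_network("192.168.102.0/24"),
}

def subnet_of(ip):
    """Return the subnet name an instance address belongs to, or None."""
    addr = ipaddress.ip_address(ip)
    for name, net in SUBNETS.items():
        if addr in net:
            return name
    return None
```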
@@ -219,8 +219,8 @@ Expected result
 VM_1 and VM_2 should be attached to multiple vNIC net01 and net02. Pings should get a response.

 Check connection between instances in one default tenant.
------------------------------------------------------------
+---------------------------------------------------------

 ID
@@ -245,10 +245,10 @@ Steps
 #####

 1. Set up for system tests.
-2. Navigate to Project -> Compute -> Instances
+2. Navigate to Project -> Compute -> Instances.
 3. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone.
 4. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone.
-5. Verify that VM_1 and VM_2 on different hypervisors communicate between each other. Send icmp ping from VM_1 of vCenter to VM_2 from Qemu/KVM and vice versa.
+5. Verify that VM_1 and VM_2 on different hypervisors communicate between each other: check that instances can ping each other.

 Expected result
@@ -285,10 +285,10 @@ Steps
 1. Set up for system tests.
 2. Log in to Horizon Dashboard.
 3. Create tenant net_01 with subnet.
-4. Navigate to Project -> Compute -> Instances
+4. Navigate to Project -> Compute -> Instances.
 5. Launch instance VM_1 with image TestVM and flavor m1.micro in nova availability zone in net_01.
 6. Launch instance VM_2 with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in net_01.
-7. Verify that instances on same tenants communicate between each other. Send icmp ping from VM_1 to VM_2 and vice versa.
+7. Verify that instances in the same tenant communicate between each other: check that VM_1 and VM_2 can ping each other.

 Expected result
@@ -297,7 +297,7 @@ Expected result
 Pings should get a response.

-Check connectivity between instances attached to different networks with and within a router between them.
+Check connectivity between instances attached to different networks with and without a router between them.
 -----------------------------------------------------------------------------------------------------------
@@ -310,7 +310,7 @@ dvs_different_networks
 Description
 ###########

-Check connectivity between instances attached to different networks with and within a router between them.
+Check connectivity between instances attached to different networks with and without a router between them.

 Complexity
@@ -333,11 +333,11 @@ Steps
 9. Launch instances in the net02 with image TestVM and flavor m1.micro in nova az.
 10. Launch instances in the net02 with image TestVM-VMDK and flavor m1.micro in vcenter az.
 11. Verify that instances of same networks communicate between each other via private ip.
-    Send icmp ping between instances.
+    Check that instances can ping each other.
 12. Verify that instances of different networks don't communicate between each other via private ip.
 13. Delete net_02 from Router_02 and add it to the Router_01.
 14. Verify that instances of different networks communicate between each other via private ip.
-    Send icmp ping between instances.
+    Check that instances can ping each other.

 Expected result
@@ -375,15 +375,15 @@ Steps
 2. Log in to Horizon Dashboard.
 3. Create non-admin tenant with name 'test_tenant': Identity -> Projects -> Create Project. On tab Project Members add admin with admin and member.
 4. Navigate to Project -> Network -> Networks.
 5. Create network with subnet.
 6. Navigate to Project -> Compute -> Instances.
 7. Launch instance VM_1 with image TestVM-VMDK in the vcenter availability zone.
 8. Navigate to test_tenant.
-9. Navigate to Project -> Network -> Networks
+9. Navigate to Project -> Network -> Networks.
 10. Create Router, set gateway and add interface.
 11. Navigate to Project -> Compute -> Instances.
 12. Launch instance VM_2 with image TestVM-VMDK in the vcenter availability zone.
-13. Verify that instances on different tenants don't communicate between each other. Send icmp ping from VM_1 of admin tenant to VM_2 of test_tenant and vice versa.
+13. Verify that instances in different tenants don't communicate between each other: check that instances can not ping each other.

 Expected result
@@ -421,14 +421,14 @@ Steps
 2. Log in to Horizon Dashboard.
 3. Create net_01: net01_subnet, 192.168.112.0/24 and attach it to the default router.
 4. Launch instance VM_1 of nova availability zone with image TestVM and flavor m1.micro in the default internal network.
 5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01.
-6. Send ping from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip.
+6. Send icmp request from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip and get related icmp reply.

 Expected result
 ###############

 Pings should get a response.

 Check connectivity of instances to public network with floating ip.
@@ -460,8 +460,8 @@ Steps
 2. Log in to Horizon Dashboard.
 3. Create net01: net01__subnet, 192.168.112.0/24 and attach it to the default router.
 4. Launch instance VM_1 of nova availability zone with image TestVM and flavor m1.micro in the default internal network. Associate floating ip.
 5. Launch instance VM_2 of vcenter availability zone with image TestVM-VMDK and flavor m1.micro in the net_01. Associate floating ip.
-6. Send ping from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip.
+6. Send icmp request from instances VM_1 and VM_2 to 8.8.8.8 or other outside ip and get related icmp reply.

 Expected result
@@ -497,29 +497,29 @@ Steps

 1. Set up for system tests.
 2. Create non-default network with subnet net_01.
 3. Launch 2 instances of vcenter availability zone and 2 instances of nova availability zone in the tenant network net_01.
 4. Launch 2 instances of vcenter availability zone and 2 instances of nova availability zone in the internal tenant network.
 5. Attach net_01 to default router.
 6. Create security group SG_1 to allow ICMP traffic.
 7. Add Ingress rule for ICMP protocol to SG_1.
 8. Create security group SG_2 to allow TCP traffic on port 22.
 9. Add Ingress rule for TCP protocol to SG_2.
-10. Remove default security group and attach SG_1 and SG_2 to VMs
-11. Check ping is available between instances.
+10. Remove default security group and attach SG_1 and SG_2 to VMs.
+11. Check that instances can ping each other.
 12. Check ssh connection is available between instances.
 13. Delete all rules from SG_1 and SG_2.
-14. Check that ssh aren't available to instances.
+14. Check that instances are not available via ssh.
 15. Add Ingress and egress rules for TCP protocol to SG_2.
 16. Check ssh connection is available between instances.
-17. Check ping is not available between instances.
+17. Check that instances can not ping each other.
 18. Add Ingress and egress rules for ICMP protocol to SG_1.
-19. Check ping is available between instances.
+19. Check that instances can ping each other.
 20. Delete Ingress rule for ICMP protocol from SG_1 (for OS cirros skip this step).
 21. Add Ingress rule for ICMP ipv6 to SG_1 (for OS cirros skip this step).
 22. Check that ping6 is available between instances (for OS cirros skip this step).
 23. Delete SG1 and SG2 security groups.
 24. Attach instances to default security group.
-25. Check ping is available between instances.
+25. Check that instances can ping each other.
 26. Check that ssh is available between instances.
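The reachability expected across steps 10-19 above reduces to one invariant: ICMP works only while SG_1 still holds its rules, ssh only while SG_2 does. A sketch of that model under those assumptions (helper names are illustrative):

```python
# SG_1 carries the ICMP rules, SG_2 the TCP/22 rules (steps 6-9).
RULE_GROUPS = {"icmp": "SG_1", "ssh": "SG_2"}

def traffic_allowed(kind, attached_groups):
    """kind: "icmp" or "ssh"; attached_groups: SGs that still hold rules."""
    return RULE_GROUPS[kind] in attached_groups
```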
@@ -556,7 +556,7 @@ Steps

 1. Set up for system tests.
 2. Log in to Horizon Dashboard.
-3. Launch 2 instances on each hypervisors.
+3. Launch 2 instances on each hypervisor (one in vcenter az and another one in nova az).
 4. Verify that traffic can be successfully sent from and received on the MAC and IP address associated with the logical port.
 5. Configure a new IP address on the instance associated with the logical port.
 6. Confirm that the instance cannot communicate with that IP address.
@@ -616,7 +616,7 @@ Steps
    * network: net1 with ip 10.0.0.5
    * SG: SG_1
 10. In tenant 'test_2' create net2 and subnet2 with CIDR 10.0.0.0/24.
-11. In tenant 'test_2' Create Router 'router_2' with external floating network
+11. In tenant 'test_2' create Router 'router_2' with external floating network.
 12. In tenant 'test_2' attach interface of net2, subnet2 to router_2.
 13. In tenant 'test_2' create security group 'SG_2' and add rule that allows ingress icmp traffic.
 14. In tenant 'test_2' launch instance:
@@ -633,16 +633,16 @@ Steps
    * flavor: m1.micro
    * network: net2 with ip 10.0.0.5
    * SG: SG_2
-16. Assign floating ips for each instance
-17. Check instances in tenant_1 communicate between each other by internal ip
-18. Check instances in tenant_2 communicate between each other by internal ip
+16. Assign floating ips for each instance.
+17. Check that instances in tenant_1 communicate between each other by internal ip.
+18. Check that instances in tenant_2 communicate between each other by internal ip.
 19. Check that instances in different tenants communicate between each other by floating ip.

 Expected result
 ###############

 Pings should get a response.

 Check creation of instances in one group simultaneously.
@@ -671,11 +671,11 @@ Steps
 #####

 1. Set up for system tests.
-2. Navigate to Project -> Compute -> Instances
+2. Navigate to Project -> Compute -> Instances.
 3. Launch few instances simultaneously with image TestVM and flavor m1.micro in nova availability zone in default internal network.
 4. Launch few instances simultaneously with image TestVM-VMDK and flavor m1.micro in vcenter availability zone in default internal network.
 5. Check connection between instances (ping, ssh).
-6. Delete all instances from horizon simultaneously.
+6. Delete all instances from Horizon simultaneously.

 Expected result
@@ -726,7 +726,7 @@ Steps
 7. Configure VMware vCenter Settings. Add 1 vSphere cluster and configure Nova Compute instances on controllers.
 8. Verify networks.
 9. Deploy cluster.
-10. Create instances for each of hypervisor's type
+10. Create instances for each hypervisor type.
 11. Create 2 volumes, each in its own availability zone.
 12. Attach each volume to its instance.
@@ -800,17 +800,19 @@ Steps
 1. Upload plugins to the master node.
 2. Install plugin.
 3. Create cluster with vcenter.
-4. Set CephOSD as backend for Glance and Cinder
+4. Set CephOSD as backend for Glance and Cinder.
 5. Add nodes with the following roles:
-   controller
-   compute-vmware
-   compute-vmware
-   compute
-   3 ceph-osd
+   * Controller
+   * Compute-VMware
+   * Compute-VMware
+   * Compute
+   * Ceph-OSD
+   * Ceph-OSD
+   * Ceph-OSD
 6. Upload network template.
 7. Check network configuration.
-8. Deploy the cluster
-9. Run OSTF
+8. Deploy the cluster.
+9. Run OSTF.

 Expected result
@@ -832,8 +834,7 @@ dvs_vcenter_remote_sg
 Description
 ###########

-Verify that network traffic is allowed/prohibited to instances according security groups
-rules.
+Verify that network traffic is allowed/prohibited to instances according to security group rules.

 Complexity
@@ -859,41 +860,41 @@ Steps
    SG_man
    SG_DNS
 6. Add rules to SG_web:
-   Ingress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-   Ingress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_db'
-   Ingress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man
-   Engress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-   Egress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_db'
-   Egress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man
+   Ingress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+   Ingress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_db'
+   Ingress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
+   Egress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+   Egress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_db'
+   Egress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
 7. Add rules to SG_db:
-   Egress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-   Egress rule with ip protocol 'https ' , port range 443-443, ip range 0.0.0.0/0
-   Ingress rule with ip protocol 'http' , port range 80-80, ip range 0.0.0.0/0
-   Ingress rule with ip protocol 'https ' , port range 443-443, ip range 0.0.0.0/0
-   Ingress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_web'
-   Ingress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man'
-   Egress rule with ip protocol 'tcp' , port range 3306-3306, SG group 'SG_web'
-   Egress rule with ip protocol 'tcp' , port range 22-22, SG group 'SG_man'
+   Egress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+   Egress rule with ip protocol 'https', port range 443-443, ip range 0.0.0.0/0
+   Ingress rule with ip protocol 'http', port range 80-80, ip range 0.0.0.0/0
+   Ingress rule with ip protocol 'https', port range 443-443, ip range 0.0.0.0/0
+   Ingress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_web'
+   Ingress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
+   Egress rule with ip protocol 'tcp', port range 3306-3306, SG group 'SG_web'
+   Egress rule with ip protocol 'tcp', port range 22-22, SG group 'SG_man'
 8. Add rules to SG_DNS:
-   Ingress rule with ip protocol 'udp ' , port range 53-53, ip-prefix 'ip DNS server'
-   Egress rule with ip protocol 'udp ' , port range 53-53, ip-prefix 'ip DNS server'
-   Ingress rule with ip protocol 'tcp' , port range 53-53, ip-prefix 'ip DNS server'
-   Egress rule with ip protocol 'tcp' , port range 53-53, ip-prefix 'ip DNS server'
+   Ingress rule with ip protocol 'udp', port range 53-53, ip-prefix 'ip DNS server'
+   Egress rule with ip protocol 'udp', port range 53-53, ip-prefix 'ip DNS server'
+   Ingress rule with ip protocol 'tcp', port range 53-53, ip-prefix 'ip DNS server'
+   Egress rule with ip protocol 'tcp', port range 53-53, ip-prefix 'ip DNS server'
 9. Add rules to SG_man:
-   Ingress rule with ip protocol 'tcp' , port range 22-22, ip range 0.0.0.0/0
-   Egress rule with ip protocol 'tcp' , port range 22-22, ip range 0.0.0.0/0
+   Ingress rule with ip protocol 'tcp', port range 22-22, ip range 0.0.0.0/0
+   Egress rule with ip protocol 'tcp', port range 22-22, ip range 0.0.0.0/0
 10. Launch the following instances in net_1 from image 'ubuntu':
    instance 'webserver' of vcenter az with SG_web, SG_DNS
    instance 'mysqldb' of vcenter az with SG_db, SG_DNS
    instance 'manage' of nova az with SG_man, SG_DNS
-11. Verify that traffic is enabled to instance 'webserver' from internet by http port 80.
+11. Verify that traffic is enabled to instance 'webserver' from external network by http port 80.
 12. Verify that traffic is enabled to instance 'webserver' from VM 'manage' by tcp port 22.
 13. Verify that traffic is enabled to instance 'webserver' from VM 'mysqldb' by tcp port 3306.
-14. Verify that traffic is enabled to internet from instance ' mysqldb' by https port 443.
-15. Verify that traffic is enabled to instance ' mysqldb' from VM 'manage' by tcp port 22.
-16. Verify that traffic is enabled to instance ' manage' from internet by tcp port 22.
-17. Verify that traffic is not enabled to instance ' webserver' from internet by tcp port 22.
-18. Verify that traffic is not enabled to instance ' mysqldb' from internet by tcp port 3306.
+14. Verify that traffic is enabled to internet from instance 'mysqldb' by https port 443.
+15. Verify that traffic is enabled to instance 'mysqldb' from VM 'manage' by tcp port 22.
+16. Verify that traffic is enabled to instance 'manage' from internet by tcp port 22.
+17. Verify that traffic is not enabled to instance 'webserver' from internet by tcp port 22.
+18. Verify that traffic is not enabled to instance 'mysqldb' from internet by tcp port 3306.
 19. Verify that traffic is not enabled to instance 'manage' from internet by http port 80.
 20. Verify that traffic is enabled to all instances from DNS server by udp/tcp port 53 and vice versa.
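The ingress rules of steps 6-9 above determine the pass/fail expectations of steps 11-19. A small illustrative model (a sketch assuming remote-group matching; not the plugin's actual evaluation code):

```python
# Ingress rules per instance: (protocol, port, allowed source), where the
# source is a security group name or "any" for ip range 0.0.0.0/0.
INGRESS = {
    "webserver": {("http", 80, "any"), ("tcp", 3306, "SG_db"),
                  ("tcp", 22, "SG_man")},
    "mysqldb": {("http", 80, "any"), ("https", 443, "any"),
                ("tcp", 3306, "SG_web"), ("tcp", 22, "SG_man")},
    "manage": {("tcp", 22, "any")},
}

def is_allowed(dst, protocol, port, src_group):
    """src_group is the sender's SG, or "any" for internet traffic."""
    rules = INGRESS[dst]
    return (protocol, port, src_group) in rules or (protocol, port, "any") in rules
```

Steps 16-18 follow directly: ssh to 'manage' is open to anyone, while ssh to 'webserver' and mysql to 'mysqldb' match only traffic from SG_man and SG_web respectively.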
@@ -901,8 +902,7 @@ Steps
 Expected result
 ###############

-Network traffic is allowed/prohibited to instances according security groups
-rules.
+Network traffic is allowed/prohibited to instances according to security group rules.

 Security group rules with remote group id simple.
@@ -918,8 +918,7 @@ dvs_remote_sg_simple
 Description
 ###########

-Verify that network traffic is allowed/prohibited to instances according security groups
-rules.
+Verify that network traffic is allowed/prohibited to instances according to security group rules.

 Complexity
@@ -947,16 +946,15 @@ Steps
    Launch 2 instances of nova az with SG1 in net1.
 8. Launch 2 instances of vcenter az with SG2 in net1.
    Launch 2 instances of nova az with SG2 in net1.
-9. Verify that icmp ping is enabled between VMs from SG1.
-10. Verify that icmp ping is enabled between instances from SG2.
-11. Verify that icmp ping is not enabled between instances from SG1 and VMs from SG2.
+9. Check that instances from SG1 can ping each other.
+10. Check that instances from SG2 can ping each other.
+11. Check that instances from SG1 can not ping instances from SG2 and vice versa.

 Expected result
 ###############

-Network traffic is allowed/prohibited to instances according security groups
-rules.
+Network traffic is allowed/prohibited to instances according to security group rules.

 Check attached/detached ports with security groups.
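With remote-group-id rules, steps 9-11 above reduce to: ICMP is expected only between instances of the same security group. A sketch of that expectation (the instance names here are hypothetical placeholders):

```python
# Hypothetical instance -> security group assignment from steps 7-8.
INSTANCE_SG = {"vm_a": "SG1", "vm_b": "SG1", "vm_c": "SG2", "vm_d": "SG2"}

def ping_expected(src, dst):
    """Remote-group rules admit ICMP only inside the same group."""
    return src != dst and INSTANCE_SG[src] == INSTANCE_SG[dst]
```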
@@ -1007,8 +1005,7 @@ Steps
 Expected result
 ###############

-Verify that network traffic is allowed/prohibited to instances according security groups
-rules.
+Network traffic is allowed/prohibited to instances according to security group rules.

 Check launch and remove instances in one group simultaneously with a few security groups.
@ -1043,23 +1040,23 @@ Steps
|
|||
Egress rule with ip protocol 'icmp', port range any, SG group 'SG1'
|
||||
Ingress rule with ssh protocol 'tcp', port range 22, SG group 'SG1'
|
||||
Egress rule with ssh protocol 'tcp', port range 22, SG group 'SG1'
|
||||
4. Create security Sg2 group with rules:
|
||||
4. Create security SG2 group with rules:
|
||||
Ingress rule with ssh protocol 'tcp', port range 22, SG group 'SG2'
|
||||
Egress rule with ssh protocol 'tcp', port range 22, SG group 'SG2'
|
||||
5. Launch a few instances of vcenter availability zone with Default SG +SG1+SG2 in net1 in one batch.
|
||||
6. Launch a few instances of nova availability zone with Default SG +SG1+SG2 in net1 in one batch.
|
||||
5. Launch a few instances of vcenter availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
|
||||
6. Launch a few instances of nova availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
|
||||
7. Verify that icmp/ssh is enabled between instances.
|
||||
8. Remove all instances.
|
||||
9. Launch a few instances of nova availability zone with Default SG +SG1+SG2 in net1 in one batch.
|
||||
10. Launch a few instances of vcenter availability zone with Default SG +SG1+SG2 in net1 in one batch.
|
||||
9. Launch a few instances of nova availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
|
||||
10. Launch a few instances of vcenter availability zone with Default SG + SG1 + SG2 in net_1 in one batch.
|
||||
11. Verify that icmp/ssh is enabled between instances.
|
||||
12. Remove all instances.
|
||||
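The SG1/SG2 rules above can be sketched as the payloads taken by novaclient's `security_group_rules.create(**ruleset)`, as this suite does elsewhere; the `rules_for` helper is an illustrative assumption, not part of the suite:

```python
# Rule payloads matching the SG1/SG2 steps above, in the dict shape used
# with novaclient's security_group_rules.create(**ruleset) in this suite.

SSH_RULE = {   # ingress/egress tcp, port 22
    'ip_protocol': 'tcp',
    'from_port': 22,
    'to_port': 22,
    'cidr': '0.0.0.0/0',
}
ICMP_RULE = {  # ping, any icmp type/code
    'ip_protocol': 'icmp',
    'from_port': -1,
    'to_port': -1,
    'cidr': '0.0.0.0/0',
}

def rules_for(group_names):
    """Map each security group name to the rule payloads to apply."""
    return {name: [SSH_RULE, ICMP_RULE] for name in group_names}

groups = rules_for(['SG1', 'SG2'])
print(sorted(groups))  # -> ['SG1', 'SG2']
```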
|
||||
|
||||
Expected result
|
||||
###############
|
||||
|
||||
Verify that network traffic is allowed/prohibited to instances according security groups
|
||||
rules.
|
||||
Verify that network traffic is allowed/prohibited to instances according to security group rules.
|
||||
|
||||
|
||||
Security group rules with remote ip prefix.
|
||||
|
@ -1102,18 +1099,17 @@ Steps
|
|||
Ingress rule with ip protocol 'tcp', port range any, <internal ip of VM2>
|
||||
Egress rule with ip protocol 'tcp', port range any, <internal ip of VM2>
|
||||
9. Launch 2 instance 'VM3' and 'VM4' of vcenter az with SG1 and SG2 in net1.
|
||||
Launch 2 instance 'VM5' and 'VM6' of nova az with SG1 and SG2 in net1.
|
||||
10. Verify that icmp ping is enabled from 'VM3', 'VM4' , 'VM5' and 'VM6' to VM1 and vice versa.
|
||||
11. Verify that icmp ping is blocked between 'VM3', 'VM4' , 'VM5' and 'VM6' and vice versa.
|
||||
12. Verify that ssh is enabled from 'VM3', 'VM4' , 'VM5' and 'VM6' to VM2 and vice versa.
|
||||
13. Verify that ssh is blocked between 'VM3', 'VM4' , 'VM5' and 'VM6' and vice versa.
|
||||
Launch 2 instances 'VM5' and 'VM6' of nova az with SG1 and SG2 in net1.
|
||||
10. Check that instances 'VM3', 'VM4', 'VM5' and 'VM6' can ping VM1 and vice versa.
|
||||
11. Check that instances 'VM3', 'VM4', 'VM5' and 'VM6' can not ping each other.
|
||||
12. Verify that ssh is enabled from 'VM3', 'VM4', 'VM5' and 'VM6' to VM2 and vice versa.
|
||||
13. Verify that ssh is blocked between 'VM3', 'VM4', 'VM5' and 'VM6' and vice versa.
|
||||
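The "remote ip prefix" rules in steps 7-8 can be sketched as a Neutron security-group-rule body limited to a single instance's internal IP. Field names follow the Neutron v2 API; the helper and the sample address are illustrative assumptions:

```python
# Minimal sketch of a Neutron security-group-rule body that only matches
# traffic to/from one address, via remote_ip_prefix (steps 7-8 above).

def rule_for_ip(direction, protocol, ip):
    """Build a rule restricted to a single internal IP (/32 prefix)."""
    return {'security_group_rule': {
        'direction': direction,            # 'ingress' or 'egress'
        'protocol': protocol,              # 'icmp' or 'tcp'
        'remote_ip_prefix': '{0}/32'.format(ip),
    }}

body = rule_for_ip('ingress', 'tcp', '192.168.112.5')
print(body['security_group_rule']['remote_ip_prefix'])  # -> 192.168.112.5/32
```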
|
||||
|
||||
Expected result
|
||||
###############
|
||||
|
||||
Verify that network traffic is allowed/prohibited to instances according security groups
|
||||
rules.
|
||||
Verify that network traffic is allowed/prohibited to instances according to security group rules.
|
||||
|
||||
|
||||
Fuel create mirror and update core repos on cluster with DVS
|
||||
|
@ -1143,14 +1139,14 @@ Steps
|
|||
|
||||
1. Set up for system tests.
|
||||
2. Log into controller node via Fuel CLI and get PID of services which were
|
||||
launched by plugin and store them.
|
||||
launched by the plugin and store them.
|
||||
3. Launch the following command on the Fuel Master node:
|
||||
`fuel-mirror create -P ubuntu -G mos ubuntu`
|
||||
`fuel-mirror create -P ubuntu -G mos ubuntu`
|
||||
4. Run the command below on the Fuel Master node:
|
||||
`fuel-mirror apply -P ubuntu -G mos ubuntu --env <env_id> --replace`
|
||||
`fuel-mirror apply -P ubuntu -G mos ubuntu --env <env_id> --replace`
|
||||
5. Run the command below on the Fuel Master node:
|
||||
`fuel --env <env_id> node --node-id <node_ids_separeted_by_coma> --tasks setup_repositories`
|
||||
And wait until task is done.
|
||||
`fuel --env <env_id> node --node-id <node_ids_separated_by_comma> --tasks setup_repositories`
|
||||
And wait until task is done.
|
||||
6. Log into controller node and check that plugin services are alive and their PIDs are not changed.
|
||||
7. Check all nodes remain in ready status.
|
||||
8. Rerun OSTF.
|
||||
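The PID check in steps 2 and 6 can be sketched as follows. The dicts stand in for output gathered over SSH (e.g. with `pgrep -f <service>`); the service names and helper are illustrative assumptions, not the suite's API:

```python
# Capture the plugin services' PIDs before the fuel-mirror/setup_repositories
# run and confirm none changed, i.e. no service was restarted by the update.

def pids_unchanged(before, after):
    """True when every service kept the same PID across the update."""
    return all(after.get(svc) == pid for svc, pid in before.items())

before = {'neutron-server': 1234, 'dvs-agent': 2345}
after = {'neutron-server': 1234, 'dvs-agent': 2345}
print(pids_unchanged(before, after))                                  # -> True
print(pids_unchanged(before, {'neutron-server': 1234, 'dvs-agent': 9999}))  # -> False
```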
|
@ -1163,8 +1159,8 @@ Cluster (nodes) should remain in ready state.
|
|||
OSTF test should be passed on rerun
|
||||
|
||||
|
||||
Modifying env with DVS plugin(removing/adding controller)
|
||||
---------------------------------------------------------
|
||||
Modifying env with DVS plugin (removing/adding controller)
|
||||
----------------------------------------------------------
|
||||
|
||||
ID
|
||||
##
|
||||
|
@ -1186,7 +1182,7 @@ core
|
|||
Steps
|
||||
#####
|
||||
|
||||
1. Install DVS plugin
|
||||
1. Install DVS plugin.
|
||||
2. Create a new environment with the following parameters:
|
||||
* Compute: KVM/QEMU with vCenter
|
||||
* Networking: Neutron with VLAN segmentation + Neutron with DVS
|
||||
|
@ -1202,15 +1198,15 @@ Steps
|
|||
5. Configure DVS plugin.
|
||||
6. Configure VMware vCenter Settings.
|
||||
7. Verify networks.
|
||||
8. Deploy changes
|
||||
9. Run OSTF
|
||||
8. Deploy changes.
|
||||
9. Run OSTF.
|
||||
10. Remove controller on which DVS agent is run.
|
||||
11. Deploy changes
|
||||
12. Rerun OSTF
|
||||
13. Add 1 nodes with controller role to the cluster
|
||||
14. Verify networks
|
||||
15. Redeploy changes
|
||||
16. Rerun OSTF
|
||||
11. Deploy changes.
|
||||
12. Rerun OSTF.
|
||||
13. Add 1 node with controller role to the cluster.
|
||||
14. Verify networks.
|
||||
15. Redeploy changes.
|
||||
16. Rerun OSTF.
|
||||
|
||||
Expected result
|
||||
###############
|
||||
|
@ -1242,14 +1238,14 @@ Steps
|
|||
#####
|
||||
|
||||
1. Set up for system tests.
|
||||
2. Remove compute from the cluster
|
||||
3. Verify networks
|
||||
4. Deploy changes
|
||||
5. Rerun OSTF
|
||||
6. Add 1 node with compute role to the cluster
|
||||
7. Verify networks
|
||||
8. Redeploy changes
|
||||
9. Rerun OSTF
|
||||
2. Remove compute from the cluster.
|
||||
3. Verify networks.
|
||||
4. Deploy changes.
|
||||
5. Rerun OSTF.
|
||||
6. Add 1 node with compute role to the cluster.
|
||||
7. Verify networks.
|
||||
8. Redeploy changes.
|
||||
9. Rerun OSTF.
|
||||
|
||||
Expected result
|
||||
###############
|
||||
|
@ -1280,7 +1276,7 @@ core
|
|||
Steps
|
||||
#####
|
||||
|
||||
1. Install DVS plugin
|
||||
1. Install DVS plugin.
|
||||
2. Create a new environment with the following parameters:
|
||||
* Compute: KVM/QEMU with vCenter
|
||||
* Networking: Neutron with VLAN segmentation
|
||||
|
@ -1297,10 +1293,10 @@ Steps
|
|||
8. Add 1 node with compute-vmware role, configure Nova Compute instance on compute-vmware and redeploy cluster.
|
||||
9. Verify that previously created instance is working.
|
||||
10. Run OSTF tests.
|
||||
11. Delete compute-vmware
|
||||
12. Redeploy changes
|
||||
11. Delete compute-vmware.
|
||||
12. Redeploy changes.
|
||||
13. Verify that previously created instance is working.
|
||||
14. Run OSTF
|
||||
14. Run OSTF.
|
||||
|
||||
Expected result
|
||||
###############
|
||||
|
|
|
@ -117,33 +117,33 @@ Target Test Items
|
|||
* Install/uninstall Fuel VMware-DVS plugin
|
||||
* Deploy Cluster with Fuel VMware-DVS plugin by Fuel
|
||||
* Roles of nodes
|
||||
* controller
|
||||
* compute
|
||||
* cinder
|
||||
* mongo
|
||||
* compute-vmware
|
||||
* cinder-vmware
|
||||
* Controller
|
||||
* Compute
|
||||
* Cinder
|
||||
* Mongo
|
||||
* Compute-VMware
|
||||
* Cinder-VMware
|
||||
* Hypervisors:
|
||||
* KVM+Vcenter
|
||||
* Qemu+Vcenter
|
||||
* KVM + vCenter
|
||||
* Qemu + vCenter
|
||||
* Storage:
|
||||
* Ceph
|
||||
* Cinder
|
||||
* VMware vCenter/ESXi datastore for images
|
||||
* Network
|
||||
* Neutron with Vlan segmentation
|
||||
* Neutron with VLAN segmentation
|
||||
* HA + Neutron with VLAN
|
||||
* Additional components
|
||||
* Ceilometer
|
||||
* Health Check
|
||||
* Upgrade master node
|
||||
* MOS and VMware-DVS plugin
|
||||
* Computes(Nova)
|
||||
* Computes (Nova)
|
||||
* Launch and manage instances
|
||||
* Launch instances in batch
|
||||
* Networks (Neutron)
|
||||
* Create and manage public and private networks.
|
||||
* Create and manage routers.
|
||||
* Create and manage public and private networks
|
||||
* Create and manage routers
|
||||
* Port binding / disabling
|
||||
* Port security
|
||||
* Security groups
|
||||
|
@ -158,7 +158,7 @@ Target Test Items
|
|||
* Create and manage projects
|
||||
* Create and manage users
|
||||
* Glance
|
||||
* Create and manage images
|
||||
* Create and manage images
|
||||
* GUI
|
||||
* Fuel UI
|
||||
* CLI
|
||||
|
@ -168,13 +168,13 @@ Target Test Items
|
|||
Test approach
|
||||
*************
|
||||
|
||||
The project test approach consists of Smoke, Integration, System, Regression
|
||||
Failover and Acceptance test levels.
|
||||
The project test approach consists of Smoke, Integration, System, Regression,
|
||||
Failover and Acceptance test levels.
|
||||
|
||||
**Smoke testing**
|
||||
|
||||
The goal of smoke testing is to ensure that the most critical features of Fuel
|
||||
VMware DVS plugin work after new build delivery. Smoke tests will be used by
|
||||
VMware DVS plugin work after new build delivery. Smoke tests will be used by
|
||||
QA to accept software builds from Development team.
|
||||
|
||||
**Integration and System testing**
|
||||
|
@ -185,8 +185,8 @@ without gaps in dataflow.
|
|||
|
||||
**Regression testing**
|
||||
|
||||
The goal of regression testing is to verify that key features of Fuel VMware
|
||||
DVS plugin are not affected by any changes performed during preparation to
|
||||
The goal of regression testing is to verify that key features of Fuel VMware
|
||||
DVS plugin are not affected by any changes performed during preparation to
|
||||
release (includes defects fixing, new features introduction and possible
|
||||
updates).
|
||||
|
||||
|
@ -199,7 +199,7 @@ malfunctions with undue loss of data or data integrity.
|
|||
**Acceptance testing**
|
||||
|
||||
The goal of acceptance testing is to ensure that Fuel VMware DVS plugin has
|
||||
reached a level of stability that meets requirements and acceptance criteria.
|
||||
reached a level of stability that meets requirements and acceptance criteria.
|
||||
|
||||
|
||||
***********************
|
||||
|
@ -256,7 +256,7 @@ Project testing activities are to be resulted in the following reporting documen
|
|||
Acceptance criteria
|
||||
===================
|
||||
|
||||
* All acceptance criteria for user stories are met.
|
||||
* All acceptance criteria for user stories are met
|
||||
* All test cases are executed. BVT tests are passed
|
||||
* Critical and high issues are fixed
|
||||
* All required documents are delivered
|
||||
|
@ -268,4 +268,4 @@ Test cases
|
|||
|
||||
.. include:: test_suite_smoke.rst
|
||||
.. include:: test_suite_system.rst
|
||||
.. include:: test_suite_failover.rst
|
||||
.. include:: test_suite_failover.rst
|
||||
|
|
|
@ -40,7 +40,7 @@ TestBasic = fuelweb_test.tests.base_test_case.TestBasic
|
|||
SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment
|
||||
|
||||
|
||||
@test(groups=["plugins", 'dvs_vcenter_plugin', 'dvs_vcenter_system',
|
||||
@test(groups=['plugins', 'dvs_vcenter_plugin', 'dvs_vcenter_system',
|
||||
'dvs_vcenter_destructive'])
|
||||
class TestDVSDestructive(TestBasic):
|
||||
"""Failover test suite.
|
||||
|
@ -59,19 +59,17 @@ class TestDVSDestructive(TestBasic):
|
|||
cmds = ['nova-manage service list | grep vcenter-vmcluster1',
|
||||
'nova-manage service list | grep vcenter-vmcluster2']
|
||||
|
||||
networks = [
|
||||
{'name': 'net_1',
|
||||
'subnets': [
|
||||
{'name': 'subnet_1',
|
||||
'cidr': '192.168.112.0/24'}
|
||||
]
|
||||
},
|
||||
{'name': 'net_2',
|
||||
'subnets': [
|
||||
{'name': 'subnet_1',
|
||||
'cidr': '192.168.113.0/24'}
|
||||
]
|
||||
}
|
||||
networks = [{
|
||||
'name': 'net_1',
|
||||
'subnets': [
|
||||
{'name': 'subnet_1',
|
||||
'cidr': '192.168.112.0/24'}
|
||||
]}, {
|
||||
'name': 'net_2',
|
||||
'subnets': [
|
||||
{'name': 'subnet_1',
|
||||
'cidr': '192.168.113.0/24'}
|
||||
]}
|
||||
]
|
||||
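The refactored `networks` structure above pairs each tenant network with its subnets; a flattening loop like the one below (a sketch — the suite's `os_conn.create_network`/`create_subnet` calls are elided, and `planned_subnets` is a hypothetical helper) shows how it is consumed:

```python
# Flatten the suite's networks structure into (network, subnet, cidr)
# triples, the arguments ultimately passed to create_network/create_subnet.

networks = [{
    'name': 'net_1',
    'subnets': [{'name': 'subnet_1', 'cidr': '192.168.112.0/24'}],
}, {
    'name': 'net_2',
    'subnets': [{'name': 'subnet_1', 'cidr': '192.168.113.0/24'}],
}]

def planned_subnets(networks):
    """List (network_name, subnet_name, cidr) triples in creation order."""
    return [(net['name'], sub['name'], sub['cidr'])
            for net in networks for sub in net['subnets']]

print(planned_subnets(networks))
```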
|
||||
# defaults
|
||||
|
@ -90,47 +88,47 @@ class TestDVSDestructive(TestBasic):
|
|||
|
||||
:param openstack_ip: type string, openstack ip
|
||||
"""
|
||||
admin = os_actions.OpenStackActions(
|
||||
os_conn = os_actions.OpenStackActions(
|
||||
openstack_ip, SERVTEST_USERNAME,
|
||||
SERVTEST_PASSWORD,
|
||||
SERVTEST_TENANT)
|
||||
|
||||
# create security group with rules for ssh and ping
|
||||
security_group = admin.create_sec_group_for_ssh()
|
||||
# Create security group with rules for ssh and ping
|
||||
security_group = os_conn.create_sec_group_for_ssh()
|
||||
|
||||
default_sg = [
|
||||
sg
|
||||
for sg in admin.neutron.list_security_groups()['security_groups']
|
||||
if sg['tenant_id'] == admin.get_tenant(SERVTEST_TENANT).id
|
||||
if sg['name'] == 'default'][0]
|
||||
_sec_groups = os_conn.neutron.list_security_groups()['security_groups']
|
||||
_serv_tenant_id = os_conn.get_tenant(SERVTEST_TENANT).id
|
||||
default_sg = [sg for sg in _sec_groups
|
||||
if sg['tenant_id'] == _serv_tenant_id and
|
||||
sg['name'] == 'default'][0]
|
||||
|
||||
network = admin.nova.networks.find(label=self.inter_net_name)
|
||||
network = os_conn.nova.networks.find(label=self.inter_net_name)
|
||||
|
||||
# create access point server
|
||||
access_point, access_point_ip = openstack.create_access_point(
|
||||
os_conn=admin, nics=[{'net-id': network.id}],
|
||||
# Create access point server
|
||||
_, access_point_ip = openstack.create_access_point(
|
||||
os_conn=os_conn,
|
||||
nics=[{'net-id': network.id}],
|
||||
security_groups=[security_group.name, default_sg['name']])
|
||||
|
||||
self.show_step(11)
|
||||
self.show_step(12)
|
||||
instances = openstack.create_instances(
|
||||
os_conn=admin, nics=[{'net-id': network.id}],
|
||||
os_conn=os_conn,
|
||||
nics=[{'net-id': network.id}],
|
||||
vm_count=1,
|
||||
security_groups=[default_sg['name']])
|
||||
openstack.verify_instance_state(admin)
|
||||
openstack.verify_instance_state(os_conn)
|
||||
|
||||
# Get private ips of instances
|
||||
ips = []
|
||||
for instance in instances:
|
||||
ips.append(admin.get_nova_instance_ip(
|
||||
instance, net_name=self.inter_net_name))
|
||||
ips = [os_conn.get_nova_instance_ip(i, net_name=self.inter_net_name)
|
||||
for i in instances]
|
||||
time.sleep(30)
|
||||
self.show_step(13)
|
||||
openstack.ping_each_other(ips=ips, access_point_ip=access_point_ip)
|
||||
|
||||
self.show_step(14)
|
||||
vcenter_name = [
|
||||
name for name in self.WORKSTATION_NODES if 'vcenter' in name].pop()
|
||||
vcenter_name = [name for name in self.WORKSTATION_NODES
|
||||
if 'vcenter' in name].pop()
|
||||
node = vmrun.Vmrun(
|
||||
self.host_type,
|
||||
self.path_to_vmx_file.format(vcenter_name),
|
||||
|
@ -143,13 +141,13 @@ class TestDVSDestructive(TestBasic):
|
|||
wait(lambda: not icmp_ping(self.VCENTER_IP),
|
||||
interval=1,
|
||||
timeout=10,
|
||||
timeout_msg='Vcenter is still available.')
|
||||
timeout_msg='vCenter is still available.')
|
||||
|
||||
self.show_step(16)
|
||||
wait(lambda: icmp_ping(self.VCENTER_IP),
|
||||
interval=5,
|
||||
timeout=120,
|
||||
timeout_msg='Vcenter is not available.')
|
||||
timeout_msg='vCenter is not available.')
|
||||
|
||||
self.show_step(17)
|
||||
openstack.ping_each_other(ips=ips, access_point_ip=access_point_ip)
|
||||
|
@ -163,7 +161,7 @@ class TestDVSDestructive(TestBasic):
|
|||
Scenario:
|
||||
1. Revert snapshot to dvs_vcenter_systest_setup.
|
||||
2. Try to uninstall dvs plugin.
|
||||
3. Check that plugin is not removed
|
||||
3. Check that plugin is not removed.
|
||||
|
||||
Duration: 1.8 hours
|
||||
|
||||
|
@ -178,12 +176,13 @@ class TestDVSDestructive(TestBasic):
|
|||
self.ssh_manager.execute_on_remote(
|
||||
ip=self.ssh_manager.admin_ip,
|
||||
cmd=cmd,
|
||||
assert_ec_equal=[1]
|
||||
)
|
||||
assert_ec_equal=[1])
|
||||
|
||||
self.show_step(3)
|
||||
output = self.ssh_manager.execute_on_remote(
|
||||
ip=self.ssh_manager.admin_ip, cmd='fuel plugins list')['stdout']
|
||||
ip=self.ssh_manager.admin_ip,
|
||||
cmd='fuel plugins list'
|
||||
)['stdout']
|
||||
assert_true(plugin.plugin_name in output[-1].split(' '),
|
||||
"Plugin '{0}' was removed".format(plugin.plugin_name))
|
||||
|
||||
|
@ -194,19 +193,18 @@ class TestDVSDestructive(TestBasic):
|
|||
"""Check abilities to bind port on DVS to VM, disable/enable this port.
|
||||
|
||||
Scenario:
|
||||
1. Revert snapshot to dvs_vcenter_systest_setup
|
||||
1. Revert snapshot to dvs_vcenter_systest_setup.
|
||||
2. Create private network net01 with subnet.
|
||||
3. Launch instance VM_1 in the net01
|
||||
with image TestVM and flavor m1.micro in nova az.
|
||||
4. Launch instance VM_2 in the net01
|
||||
with image TestVM-VMDK and flavor m1.micro in vcenter az.
|
||||
3. Launch instance VM_1 in the net01 with
|
||||
image TestVM and flavor m1.micro in nova az.
|
||||
4. Launch instance VM_2 in the net01 with
|
||||
image TestVM-VMDK and flavor m1.micro in vcenter az.
|
||||
5. Disable sub_net port of instances.
|
||||
6. Check instances are not available.
|
||||
7. Enable sub_net port of all instances.
|
||||
8. Verify that instances communicate between each other.
|
||||
Send icmp ping between instances.
|
||||
|
||||
|
||||
Duration: 1,5 hours
|
||||
|
||||
"""
|
||||
|
@ -221,22 +219,20 @@ class TestDVSDestructive(TestBasic):
|
|||
SERVTEST_PASSWORD,
|
||||
SERVTEST_TENANT)
|
||||
|
||||
# create security group with rules for ssh and ping
|
||||
# Create security group with rules for ssh and ping
|
||||
security_group = os_conn.create_sec_group_for_ssh()
|
||||
|
||||
self.show_step(2)
|
||||
net = self.networks[0]
|
||||
network = os_conn.create_network(network_name=net['name'])['network']
|
||||
net_1 = os_conn.create_network(network_name=net['name'])['network']
|
||||
|
||||
subnet = os_conn.create_subnet(
|
||||
subnet_name=net['subnets'][0]['name'],
|
||||
network_id=network['id'],
|
||||
network_id=net_1['id'],
|
||||
cidr=net['subnets'][0]['cidr'])
|
||||
|
||||
logger.info("Check network was created.")
|
||||
assert_true(
|
||||
os_conn.get_network(network['name'])['id'] == network['id']
|
||||
)
|
||||
assert_true(os_conn.get_network(net_1['name'])['id'] == net_1['id'])
|
||||
|
||||
logger.info("Add net_1 to default router")
|
||||
router = os_conn.get_router(os_conn.get_network(self.ext_net_name))
|
||||
|
@ -246,42 +242,37 @@ class TestDVSDestructive(TestBasic):
|
|||
self.show_step(3)
|
||||
self.show_step(4)
|
||||
instances = openstack.create_instances(
|
||||
os_conn=os_conn, nics=[{'net-id': network['id']}], vm_count=1,
|
||||
security_groups=[security_group.name]
|
||||
)
|
||||
os_conn=os_conn,
|
||||
nics=[{'net-id': net_1['id']}],
|
||||
vm_count=1,
|
||||
security_groups=[security_group.name])
|
||||
openstack.verify_instance_state(os_conn)
|
||||
|
||||
ports = os_conn.neutron.list_ports()['ports']
|
||||
fips = openstack.create_and_assign_floating_ips(os_conn, instances)
|
||||
|
||||
inst_ips = [os_conn.get_nova_instance_ip(
|
||||
instance, net_name=network['name']) for instance in instances]
|
||||
inst_ips = [os_conn.get_nova_instance_ip(i, net_name=net_1['name'])
|
||||
for i in instances]
|
||||
inst_ports = [p for p in ports
|
||||
if p['fixed_ips'][0]['ip_address'] in inst_ips]
|
||||
|
||||
self.show_step(5)
|
||||
_body = {'port': {'admin_state_up': False}}
|
||||
for port in inst_ports:
|
||||
os_conn.neutron.update_port(
|
||||
port['id'], {'port': {'admin_state_up': False}}
|
||||
)
|
||||
os_conn.neutron.update_port(port=port['id'], body=_body)
|
||||
|
||||
self.show_step(6)
|
||||
# TODO(vgorin) create better solution for this step
|
||||
try:
|
||||
openstack.ping_each_other(fips)
|
||||
checker = 1
|
||||
except Exception as e:
|
||||
logger.info(e)
|
||||
checker = 0
|
||||
|
||||
if checker:
|
||||
    fail('Ping is available between instances')
|
||||
|
||||
self.show_step(7)
|
||||
_body = {'port': {'admin_state_up': True}}
|
||||
for port in inst_ports:
|
||||
os_conn.neutron.update_port(
|
||||
port['id'], {'port': {'admin_state_up': True}}
|
||||
)
|
||||
os_conn.neutron.update_port(port=port['id'], body=_body)
|
||||
|
||||
self.show_step(8)
|
||||
openstack.ping_each_other(fips, timeout=90)
|
||||
|
@ -290,19 +281,19 @@ class TestDVSDestructive(TestBasic):
|
|||
groups=["dvs_destructive_setup_2"])
|
||||
@log_snapshot_after_test
|
||||
def dvs_destructive_setup_2(self):
|
||||
"""Verify that vmclusters should be migrate after reset controller.
|
||||
"""Verify that vmclusters migrate after reset controller.
|
||||
|
||||
Scenario:
|
||||
1. Upload plugins to the master node
|
||||
1. Upload plugins to the master node.
|
||||
2. Install plugin.
|
||||
3. Configure cluster with 2 vcenter clusters.
|
||||
4. Add 3 node with controller role.
|
||||
5. Add 2 node with compute role.
|
||||
6. Configure vcenter
|
||||
6. Configure vcenter.
|
||||
7. Deploy the cluster.
|
||||
8. Run smoke OSTF tests
|
||||
9. Launch instances. 1 per az. Assign floating ips.
|
||||
10. Make snapshot
|
||||
10. Make snapshot.
|
||||
|
||||
Duration: 1.8 hours
|
||||
Snapshot: dvs_destructive_setup_2
|
||||
|
@ -325,14 +316,12 @@ class TestDVSDestructive(TestBasic):
|
|||
|
||||
self.show_step(3)
|
||||
self.show_step(4)
|
||||
self.fuel_web.update_nodes(
|
||||
cluster_id,
|
||||
{'slave-01': ['controller'],
|
||||
'slave-02': ['controller'],
|
||||
'slave-03': ['controller'],
|
||||
'slave-04': ['compute'],
|
||||
'slave-05': ['compute']}
|
||||
)
|
||||
self.fuel_web.update_nodes(cluster_id,
|
||||
{'slave-01': ['controller'],
|
||||
'slave-02': ['controller'],
|
||||
'slave-03': ['controller'],
|
||||
'slave-04': ['compute'],
|
||||
'slave-05': ['compute']})
|
||||
self.show_step(6)
|
||||
self.fuel_web.vcenter_configure(cluster_id, multiclusters=True)
|
||||
|
||||
|
@ -340,8 +329,7 @@ class TestDVSDestructive(TestBasic):
|
|||
self.fuel_web.deploy_cluster_wait(cluster_id)
|
||||
|
||||
self.show_step(8)
|
||||
self.fuel_web.run_ostf(
|
||||
cluster_id=cluster_id, test_sets=['smoke'])
|
||||
self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])
|
||||
|
||||
self.show_step(9)
|
||||
os_ip = self.fuel_web.get_public_vip(cluster_id)
|
||||
|
@ -354,9 +342,10 @@ class TestDVSDestructive(TestBasic):
|
|||
|
||||
network = os_conn.nova.networks.find(label=self.inter_net_name)
|
||||
instances = openstack.create_instances(
|
||||
os_conn=os_conn, nics=[{'net-id': network.id}], vm_count=1,
|
||||
security_groups=[security_group.name]
|
||||
)
|
||||
os_conn=os_conn,
|
||||
nics=[{'net-id': network.id}],
|
||||
vm_count=1,
|
||||
security_groups=[security_group.name])
|
||||
openstack.verify_instance_state(os_conn)
|
||||
openstack.create_and_assign_floating_ips(os_conn, instances)
|
||||
|
||||
|
@ -367,16 +356,16 @@ class TestDVSDestructive(TestBasic):
|
|||
groups=["dvs_vcenter_reset_controller"])
|
||||
@log_snapshot_after_test
|
||||
def dvs_vcenter_reset_controller(self):
|
||||
"""Verify that vmclusters should be migrate after reset controller.
|
||||
"""Verify that vmclusters migrate after reset controller.
|
||||
|
||||
Scenario:
|
||||
1. Revert to 'dvs_destructive_setup_2' snapshot.
|
||||
2. Verify connection between instances. Send ping,
|
||||
check that ping get reply
|
||||
check that ping gets a reply.
|
||||
3. Reset controller.
|
||||
4. Check that vmclusters migrate to another controller.
|
||||
5. Verify connection between instances.
|
||||
Send ping, check that ping get reply
|
||||
5. Verify connection between instances. Send ping, check that
|
||||
ping gets a reply.
|
||||
|
||||
Duration: 1.8 hours
|
||||
|
||||
|
@ -393,11 +382,9 @@ class TestDVSDestructive(TestBasic):
|
|||
|
||||
self.show_step(2)
|
||||
srv_list = os_conn.get_servers()
|
||||
fips = []
|
||||
for srv in srv_list:
|
||||
fips.append(os_conn.get_nova_instance_ip(
|
||||
srv, net_name=self.inter_net_name, addrtype='floating'))
|
||||
|
||||
fips = [os_conn.get_nova_instance_ip(s, net_name=self.inter_net_name,
|
||||
addrtype='floating')
|
||||
for s in srv_list]
|
||||
openstack.ping_each_other(fips)
|
||||
|
||||
d_ctrl = self.fuel_web.get_nailgun_primary_node(
|
||||
|
@ -417,16 +404,16 @@ class TestDVSDestructive(TestBasic):
|
|||
groups=["dvs_vcenter_shutdown_controller"])
|
||||
@log_snapshot_after_test
|
||||
def dvs_vcenter_shutdown_controller(self):
|
||||
"""Verify that vmclusters should be migrate after shutdown controller.
|
||||
"""Verify that vmclusters migrate after shutdown controller.
|
||||
|
||||
Scenario:
|
||||
1. Revert to 'dvs_destructive_setup_2' snapshot.
|
||||
2. Verify connection between instances. Send ping,
|
||||
2. Verify connection between instances. Send ping,
|
||||
check that ping gets a reply.
|
||||
3. Shutdown controller.
|
||||
4. Check that vmclusters should be migrate to another controller.
|
||||
4. Check that vmclusters migrate to another controller.
|
||||
5. Verify connection between instances.
|
||||
Send ping, check that ping get reply
|
||||
Send ping, check that ping gets a reply.
|
||||
|
||||
Duration: 1.8 hours
|
||||
|
||||
|
@ -443,10 +430,9 @@ class TestDVSDestructive(TestBasic):
|
|||
|
||||
self.show_step(2)
|
||||
srv_list = os_conn.get_servers()
|
||||
fips = []
|
||||
for srv in srv_list:
|
||||
fips.append(os_conn.get_nova_instance_ip(
|
||||
srv, net_name=self.inter_net_name, addrtype='floating'))
|
||||
fips = [os_conn.get_nova_instance_ip(
|
||||
srv, net_name=self.inter_net_name, addrtype='floating')
|
||||
for srv in srv_list]
|
||||
openstack.ping_each_other(fips)
|
||||
|
||||
n_ctrls = self.fuel_web.get_nailgun_cluster_nodes_by_roles(
|
||||
|
@ -469,7 +455,7 @@ class TestDVSDestructive(TestBasic):
|
|||
groups=["dvs_reboot_vcenter_1"])
|
||||
@log_snapshot_after_test
|
||||
def dvs_reboot_vcenter_1(self):
|
||||
"""Verify that vmclusters should be migrate after reset controller.
|
||||
"""Verify that environment remains functional after vcenter reboot.
|
||||
|
||||
Scenario:
|
||||
1. Install DVS plugin on master node.
|
||||
|
@ -495,15 +481,14 @@ class TestDVSDestructive(TestBasic):
|
|||
and flavor m1.micro.
|
||||
12. Launch instance VM_2 with image TestVM-VMDK, availability zone
|
||||
vcenter and flavor m1.micro.
|
||||
13. Check connection between instances, send ping from VM_1 to VM_2
|
||||
and vice verse.
|
||||
13. Verify connection between instances: check that VM_1 and VM_2
|
||||
can ping each other.
|
||||
14. Reboot vcenter.
|
||||
15. Check that controller lost connection with vCenter.
|
||||
16. Wait for vCenter.
|
||||
17. Ensure that all instances from vCenter displayed in dashboard.
|
||||
18. Run OSTF.
|
||||
|
||||
|
||||
Duration: 2.5 hours
|
||||
|
||||
"""
|
||||
|
@ -525,13 +510,11 @@ class TestDVSDestructive(TestBasic):
|
|||
self.show_step(3)
|
||||
self.show_step(4)
|
||||
self.show_step(5)
|
||||
self.fuel_web.update_nodes(
|
||||
cluster_id,
|
||||
{'slave-01': ['controller'],
|
||||
'slave-02': ['compute'],
|
||||
'slave-03': ['cinder-vmware'],
|
||||
'slave-04': ['cinder']}
|
||||
)
|
||||
self.fuel_web.update_nodes(cluster_id,
|
||||
{'slave-01': ['controller'],
|
||||
'slave-02': ['compute'],
|
||||
'slave-03': ['cinder-vmware'],
|
||||
'slave-04': ['cinder']})
|
||||
|
||||
self.show_step(6)
|
||||
plugin.enable_plugin(cluster_id, self.fuel_web)
|
||||
|
@ -558,7 +541,7 @@ class TestDVSDestructive(TestBasic):
|
|||
groups=["dvs_reboot_vcenter_2"])
|
||||
@log_snapshot_after_test
|
||||
def dvs_reboot_vcenter_2(self):
|
||||
"""Verify that vmclusters should be migrate after reset controller.
|
||||
"""Verify that environment remains functional after vcenter reboot.
|
||||
|
||||
Scenario:
|
||||
1. Install DVS plugin on master node.
|
||||
|
@ -585,15 +568,14 @@ class TestDVSDestructive(TestBasic):
|
|||
and flavor m1.micro.
|
||||
12. Launch instance VM_2 with image TestVM-VMDK, availability zone
|
||||
vcenter and flavor m1.micro.
|
||||
13. Check connection between instances, send ping from VM_1 to VM_2
|
||||
and vice verse.
|
||||
13. Verify connection between instances: check that VM_1 and VM_2
|
||||
can ping each other.
|
||||
14. Reboot vcenter.
|
||||
15. Check that controller lost connection with vCenter.
|
||||
16. Wait for vCenter.
|
||||
17. Ensure that all instances from vCenter displayed in dashboard.
|
||||
18. Run Smoke OSTF.
|
||||
|
||||
|
||||
Duration: 2.5 hours
|
||||
|
||||
"""
|
||||
|
@ -615,25 +597,21 @@ class TestDVSDestructive(TestBasic):
|
|||
self.show_step(3)
|
||||
self.show_step(4)
|
||||
self.show_step(5)
|
||||
self.fuel_web.update_nodes(
|
||||
cluster_id,
|
||||
{'slave-01': ['controller'],
|
||||
'slave-02': ['compute'],
|
||||
'slave-03': ['cinder-vmware'],
|
||||
'slave-04': ['cinder'],
|
||||
'slave-05': ['compute-vmware']}
|
||||
)
|
||||
self.fuel_web.update_nodes(cluster_id,
|
||||
{'slave-01': ['controller'],
|
||||
'slave-02': ['compute'],
|
||||
'slave-03': ['cinder-vmware'],
|
||||
'slave-04': ['cinder'],
|
||||
'slave-05': ['compute-vmware']})
|
||||
|
||||
self.show_step(6)
|
||||
plugin.enable_plugin(cluster_id, self.fuel_web)
|
||||
|
||||
self.show_step(7)
|
||||
target_node_1 = self.node_name('slave-05')
|
||||
self.fuel_web.vcenter_configure(
|
||||
cluster_id,
|
||||
target_node_1=target_node_1,
|
||||
multiclusters=False
|
||||
)
|
||||
self.fuel_web.vcenter_configure(cluster_id,
|
||||
target_node_1=target_node_1,
|
||||
multiclusters=False)
|
||||
|
||||
self.show_step(8)
|
||||
self.fuel_web.verify_network(cluster_id)
|
||||
|
|
|
@ -54,22 +54,16 @@ class TestDVSMaintenance(TestBasic):
|
|||
"""Deploy cluster with plugin and vmware datastore backend.
|
||||
|
||||
Scenario:
|
||||
1. Upload plugins to the master node
|
||||
2. Install plugin.
|
||||
3. Create cluster with vcenter.
|
||||
4. Add 3 node with controller role.
|
||||
5. Add 2 node with compute + ceph role.
|
||||
6. Add 1 node with compute-vmware + cinder vmware role.
|
||||
7. Deploy the cluster.
|
||||
8. Run OSTF.
|
||||
9. Create non default network.
|
||||
10. Create Security groups
|
||||
11. Launch instances with created network in nova and vcenter az.
|
||||
12. Attached created security groups to instances.
|
||||
13. Check connection between instances from different az.
|
||||
1. Revert to dvs_bvt snapshot.
|
||||
2. Create non default network net_1.
|
||||
3. Launch instances with created network in nova and vcenter az.
|
||||
4. Create Security groups.
|
||||
5. Attach created security groups to instances.
|
||||
6. Check connection between instances from different az.
|
||||
|
||||
Duration: 1.8 hours
|
||||
"""
|
||||
self.show_step(1)
|
||||
self.env.revert_snapshot("dvs_bvt")
|
||||
|
||||
cluster_id = self.fuel_web.get_last_created_cluster()
|
||||
|
@ -81,80 +75,81 @@ class TestDVSMaintenance(TestBasic):
|
|||
SERVTEST_TENANT)
|
||||
|
||||
tenant = os_conn.get_tenant(SERVTEST_TENANT)
|
||||
# Create non default network with subnet.
|
||||
|
||||
# Create non default network with subnet
|
||||
self.show_step(2)
|
||||
|
||||
logger.info('Create network {}'.format(self.net_data[0].keys()[0]))
|
||||
network = os_conn.create_network(
|
||||
net_1 = os_conn.create_network(
|
||||
network_name=self.net_data[0].keys()[0],
|
||||
tenant_id=tenant.id)['network']
|
||||
|
||||
subnet = os_conn.create_subnet(
|
||||
subnet_name=network['name'],
|
||||
network_id=network['id'],
|
||||
subnet_name=net_1['name'],
|
||||
network_id=net_1['id'],
|
||||
cidr=self.net_data[0][self.net_data[0].keys()[0]],
|
||||
ip_version=4)
|
||||
|
||||
# Check that network are created.
|
||||
assert_true(
|
||||
os_conn.get_network(network['name'])['id'] == network['id']
|
||||
)
|
||||
# Check that the network is created
|
||||
assert_true(os_conn.get_network(net_1['name'])['id'] == net_1['id'])
|
||||
|
||||
# Add net_1 to default router
|
||||
router = os_conn.get_router(os_conn.get_network(self.ext_net_name))
|
||||
os_conn.add_router_interface(
|
||||
router_id=router["id"],
|
||||
subnet_id=subnet["id"])
|
||||
# Launch instance 2 VMs of vcenter and 2 VMs of nova
|
||||
# in the tenant network net_01
|
||||
openstack.create_instances(
|
||||
os_conn=os_conn, vm_count=1,
|
||||
nics=[{'net-id': network['id']}]
|
||||
)
|
||||
# Launch instance 2 VMs of vcenter and 2 VMs of nova
|
||||
# in the default network
|
||||
network = os_conn.nova.networks.find(label=self.inter_net_name)
|
||||
instances = openstack.create_instances(
|
||||
os_conn=os_conn, vm_count=1,
|
||||
nics=[{'net-id': network.id}])
|
||||
os_conn.add_router_interface(router_id=router["id"],
|
||||
subnet_id=subnet["id"])
|
||||
|
||||
self.show_step(3)
|
||||
|
||||
# Launch 2 vcenter VMs and 2 nova VMs in the tenant network net_01
|
||||
openstack.create_instances(os_conn=os_conn,
|
||||
vm_count=1,
|
||||
nics=[{'net-id': net_1['id']}])
|
||||
|
||||
# Launch 2 vcenter VMs and 2 nova VMs in the default network
|
||||
net_1 = os_conn.nova.networks.find(label=self.inter_net_name)
|
||||
instances = openstack.create_instances(os_conn=os_conn,
|
||||
vm_count=1,
|
||||
nics=[{'net-id': net_1.id}])
|
||||
openstack.verify_instance_state(os_conn)
|
||||
|
||||
self.show_step(4)
|
||||
|
||||
# Create security groups SG_1 to allow ICMP traffic.
|
||||
# Add Ingress rule for ICMP protocol to SG_1
|
||||
# Create security groups SG_2 to allow TCP traffic 22 port.
|
||||
# Add Ingress rule for TCP protocol to SG_2
|
||||
sec_name = ['SG1', 'SG2']
|
||||
sg1 = os_conn.nova.security_groups.create(
|
||||
sec_name[0], "descr")
|
||||
sg2 = os_conn.nova.security_groups.create(
|
||||
sec_name[1], "descr")
|
||||
rulesets = [
|
||||
{
|
||||
# ssh
|
||||
'ip_protocol': 'tcp',
|
||||
'from_port': 22,
|
||||
'to_port': 22,
|
||||
'cidr': '0.0.0.0/0',
|
||||
},
|
||||
{
|
||||
# ping
|
||||
'ip_protocol': 'icmp',
|
||||
'from_port': -1,
|
||||
'to_port': -1,
|
||||
'cidr': '0.0.0.0/0',
|
||||
}
|
||||
]
|
||||
os_conn.nova.security_group_rules.create(
|
||||
sg1.id, **rulesets[0]
|
||||
)
|
||||
os_conn.nova.security_group_rules.create(
|
||||
sg2.id, **rulesets[1]
|
||||
)
|
||||
sg1 = os_conn.nova.security_groups.create(sec_name[0], "descr")
|
||||
sg2 = os_conn.nova.security_groups.create(sec_name[1], "descr")
|
||||
rulesets = [{
|
||||
# ssh
|
||||
'ip_protocol': 'tcp',
|
||||
'from_port': 22,
|
||||
'to_port': 22,
|
||||
'cidr': '0.0.0.0/0',
|
||||
}, {
|
||||
# ping
|
||||
'ip_protocol': 'icmp',
|
||||
'from_port': -1,
|
||||
'to_port': -1,
|
||||
'cidr': '0.0.0.0/0',
|
||||
}]
|
||||
os_conn.nova.security_group_rules.create(sg1.id, **rulesets[0])
|
||||
os_conn.nova.security_group_rules.create(sg2.id, **rulesets[1])
|
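The two rulesets above are plain dicts handed to nova's `security_group_rules.create`: one opens TCP port 22 (SSH), one allows all ICMP (ping). A minimal standalone sketch of the same data shape; the `describe` helper is hypothetical, not part of the tests:

```python
# The same two rule definitions used by the tests: SSH (TCP/22) and ping (ICMP).
rulesets = [
    {'ip_protocol': 'tcp', 'from_port': 22, 'to_port': 22,
     'cidr': '0.0.0.0/0'},
    {'ip_protocol': 'icmp', 'from_port': -1, 'to_port': -1,
     'cidr': '0.0.0.0/0'},
]


def describe(rule):
    # Hypothetical helper: render one rule as a short human-readable line.
    return '{ip_protocol} {from_port}:{to_port} from {cidr}'.format(**rule)


for rule in rulesets:
    print(describe(rule))
```

For ICMP, `-1` in both port fields is the conventional "all ICMP types/codes" value in the nova security-group API.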
        # Remove default security group and attach SG_1 and SG2 to VMs
        self.show_step(5)
        srv_list = os_conn.get_servers()
        for srv in srv_list:
            srv.remove_security_group(srv.security_groups[0]['name'])
            srv.add_security_group(sg1.id)
            srv.add_security_group(sg2.id)
        fip = openstack.create_and_assign_floating_ips(os_conn, instances)

        # Check ping between VMs
        self.show_step(6)
        ip_pair = dict.fromkeys(fip)
        for key in ip_pair:
            ip_pair[key] = [value for value in fip if key != value]
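The `ip_pair` mapping built above gives, for every floating IP, the list of all other floating IPs that instance is expected to reach. A standalone sketch of the same construction with made-up addresses:

```python
# 'fip' stands in for the list returned by create_and_assign_floating_ips;
# the addresses here are made up for illustration.
fip = ['172.16.0.3', '172.16.0.4', '172.16.0.5']

ip_pair = dict.fromkeys(fip)
for key in ip_pair:
    # each instance should be able to ping every other instance
    ip_pair[key] = [value for value in fip if key != value]
```

The result is an all-pairs map, so a later loop can ping each `key -> peer` combination exactly once per direction.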
@@ -26,12 +26,12 @@ TestBasic = fuelweb_test.tests.base_test_case.TestBasic
SetupEnvironment = fuelweb_test.tests.base_test_case.SetupEnvironment


@test(groups=["plugins", 'dvs_vcenter_plugin', 'dvs_vcenter_smoke'])
@test(groups=['plugins', 'dvs_vcenter_plugin', 'dvs_vcenter_smoke'])
class TestDVSSmoke(TestBasic):
    """Smoke test suite.

    The goal of smoke testing is to ensure that the most critical features
    of Fuel VMware DVS plugin work after new build delivery. Smoke tests
    will be used by QA to accept software builds from Development team.
    """

@@ -42,7 +42,7 @@ class TestDVSSmoke(TestBasic):
        """Check that plugin can be installed.

        Scenario:
            1. Upload plugins to the master node
            1. Upload plugins to the master node.
            2. Install plugin.
            3. Ensure that plugin is installed successfully using cli,
               run command 'fuel plugins'. Check name, version of plugin.

@@ -59,19 +59,17 @@ class TestDVSSmoke(TestBasic):
        cmd = 'fuel plugins list'

        output = self.ssh_manager.execute_on_remote(
            ip=self.ssh_manager.admin_ip,
            cmd=cmd)['stdout'].pop().split(' ')
            ip=self.ssh_manager.admin_ip, cmd=cmd
        )['stdout'].pop().split(' ')

        # check name
        assert_true(
            plugin.plugin_name in output,
            "Plugin '{0}' is not installed.".format(plugin.plugin_name)
        )
            "Plugin '{0}' is not installed.".format(plugin.plugin_name))
        # check version
        assert_true(
            plugin.DVS_PLUGIN_VERSION in output,
            "Plugin '{0}' is not installed.".format(plugin.plugin_name)
        )
            "Plugin '{0}' is not installed.".format(plugin.plugin_name))
        self.env.make_snapshot("dvs_install", is_make=True)
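The install check above pops the last line of `fuel plugins list` stdout, splits it on single spaces, and looks for the plugin name and version as separate tokens. A sketch with a fabricated output line (the real column layout of the CLI may differ):

```python
# Fabricated stdout of 'fuel plugins list'; the real layout may differ.
stdout = ['id | name                   | version | package_version\n',
          '1  | fuel-plugin-vmware-dvs | 3.1.1   | 4.0.0\n']

plugin_name = 'fuel-plugin-vmware-dvs'
plugin_version = '3.1.1'

# Same parsing as the test: take the last line, split on single spaces,
# then check membership of name and version among the resulting tokens.
output = stdout.pop().split(' ')
name_ok = plugin_name in output
version_ok = plugin_version in output
```

Splitting on a single space leaves empty strings and `|` separators in `output`, but exact-token membership still works, which is why the test can use a plain `in` check instead of parsing columns.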
    @test(depends_on=[dvs_install],

@@ -85,7 +83,6 @@ class TestDVSSmoke(TestBasic):
            2. Remove plugin.
            3. Verify that plugin is removed, run command 'fuel plugins'.

        Duration: 5 min

        """

@@ -96,21 +93,17 @@ class TestDVSSmoke(TestBasic):
        cmd = 'fuel plugins --remove {0}=={1}'.format(
            plugin.plugin_name, plugin.DVS_PLUGIN_VERSION)

        self.ssh_manager.execute_on_remote(
            ip=self.ssh_manager.admin_ip,
            cmd=cmd,
            err_msg='Can not remove plugin.'
        )
        self.ssh_manager.execute_on_remote(ip=self.ssh_manager.admin_ip,
                                           cmd=cmd,
                                           err_msg='Can not remove plugin.')

        self.show_step(3)
        output = self.ssh_manager.execute_on_remote(
            ip=self.ssh_manager.admin_ip,
            cmd='fuel plugins list')['stdout'].pop().split(' ')

        assert_true(
            plugin.plugin_name not in output,
            "Plugin '{0}' is not removed".format(plugin.plugin_name)
        )
        assert_true(plugin.plugin_name not in output,
                    "Plugin '{0}' is not removed".format(plugin.plugin_name))

    @test(depends_on=[dvs_install],
          groups=["dvs_vcenter_smoke"])

@@ -119,7 +112,7 @@ class TestDVSSmoke(TestBasic):
        """Check deployment with VMware DVS plugin and one controller.

        Scenario:
            1. Upload plugins to the master node
            1. Upload plugins to the master node.
            2. Install plugin.
            3. Create a new environment with following parameters:
                * Compute: KVM/QEMU with vCenter

@@ -130,7 +123,7 @@ class TestDVSSmoke(TestBasic):
            5. Configure interfaces on nodes.
            6. Configure network settings.
            7. Enable and configure DVS plugin.
            8 Configure VMware vCenter Settings.
            8. Configure VMware vCenter Settings.
               Add 1 vSphere clusters and configure Nova Compute instances
               on controllers.
            9. Deploy the cluster.

@@ -139,9 +132,13 @@ class TestDVSSmoke(TestBasic):
        Duration: 1.8 hours

        """
        self.show_step(1)
        self.show_step(2)
        self.env.revert_snapshot("dvs_install")

        # Configure cluster with 2 vcenter clusters
        self.show_step(3)

        cluster_id = self.fuel_web.create_cluster(
            name=self.__class__.__name__,
            mode=DEPLOYMENT_MODE,

@@ -150,22 +147,24 @@ class TestDVSSmoke(TestBasic):
                "net_segment_type": NEUTRON_SEGMENT_TYPE
            }
        )
        plugin.enable_plugin(
            cluster_id, self.fuel_web, multiclusters=False)
        plugin.enable_plugin(cluster_id, self.fuel_web, multiclusters=False)

        # Assign role to node
        self.fuel_web.update_nodes(
            cluster_id,
            {'slave-01': ['controller']}
        )
        self.show_step(4)
        self.fuel_web.update_nodes(cluster_id, {'slave-01': ['controller']})

        # Configure VMWare vCenter settings
        self.show_step(5)
        self.show_step(6)
        self.show_step(7)
        self.show_step(8)
        self.fuel_web.vcenter_configure(cluster_id)

        self.show_step(9)
        self.fuel_web.deploy_cluster_wait(cluster_id)

        self.fuel_web.run_ostf(
            cluster_id=cluster_id, test_sets=['smoke'])
        self.show_step(10)
        self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])


@test(groups=["plugins", 'dvs_vcenter_bvt'])

@@ -179,7 +178,7 @@ class TestDVSBVT(TestBasic):
        """Deploy cluster with DVS plugin and ceph storage.

        Scenario:
            1. Upload plugins to the master node
            1. Upload plugins to the master node.
            2. Install plugin.
            3. Create a new environment with following parameters:
                * Compute: KVM/QEMU with vCenter

@@ -209,10 +208,13 @@ class TestDVSBVT(TestBasic):
        """
        self.env.revert_snapshot("ready_with_9_slaves")

        plugin.install_dvs_plugin(
            self.ssh_manager.admin_ip)
        self.show_step(1)
        self.show_step(2)
        plugin.install_dvs_plugin(self.ssh_manager.admin_ip)

        # Configure cluster with 2 vcenter clusters and vcenter glance
        self.show_step(3)

        cluster_id = self.fuel_web.create_cluster(
            name=self.__class__.__name__,
            mode=DEPLOYMENT_MODE,

@@ -227,7 +229,9 @@ class TestDVSBVT(TestBasic):
        )
        plugin.enable_plugin(cluster_id, self.fuel_web)

        # Assign role to node
        # Assign roles to nodes
        self.show_step(4)

        self.fuel_web.update_nodes(
            cluster_id,
            {'slave-01': ['controller'],

@@ -236,22 +240,26 @@ class TestDVSBVT(TestBasic):
             'slave-04': ['compute', 'ceph-osd'],
             'slave-05': ['compute', 'ceph-osd'],
             'slave-06': ['compute', 'ceph-osd'],
             'slave-07': ['compute-vmware', 'cinder-vmware']}
        )
             'slave-07': ['compute-vmware', 'cinder-vmware']})

        # Configure VMWare vCenter settings
        self.show_step(5)
        self.show_step(6)
        self.show_step(7)

        target_node_2 = self.fuel_web.get_nailgun_node_by_name('slave-07')
        target_node_2 = target_node_2['hostname']
        self.fuel_web.vcenter_configure(
            cluster_id,
            target_node_2=target_node_2,
            multiclusters=True
        )
        self.fuel_web.vcenter_configure(cluster_id,
                                        target_node_2=target_node_2,
                                        multiclusters=True)

        self.show_step(8)
        self.fuel_web.verify_network(cluster_id, timeout=60 * 15)

        self.show_step(9)
        self.fuel_web.deploy_cluster_wait(cluster_id, timeout=3600 * 3)

        self.fuel_web.run_ostf(
            cluster_id=cluster_id, test_sets=['smoke'])
        self.show_step(10)
        self.fuel_web.run_ostf(cluster_id=cluster_id, test_sets=['smoke'])

        self.env.make_snapshot("dvs_bvt", is_make=True)
File diff suppressed because it is too large

@@ -41,7 +41,7 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
        return self.fuel_web.get_nailgun_node_by_name(name_node)['hostname']

    def get_network_template(self, template_name):
        """Get netwok template.
        """Get network template.

        param: template_name: type string, name of file
        """
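The `get_network_template` helper above resolves a template by name and returns its contents. A hypothetical standalone equivalent, assuming templates live as `<name>.yaml` files in some directory (the path layout is an assumption, not taken from the diff):

```python
import os
import tempfile


def get_network_template(template_name, templates_dir):
    # Hypothetical standalone equivalent of the helper above: read
    # '<templates_dir>/<template_name>.yaml' and return its text.
    path = os.path.join(templates_dir, '{0}.yaml'.format(template_name))
    with open(path) as f:
        return f.read()


# Demonstrate with a throwaway template file.
tmp_dir = tempfile.mkdtemp()
with open(os.path.join(tmp_dir, 'default.yaml'), 'w') as f:
    f.write('network_scheme: {}\n')

template = get_network_template('default', tmp_dir)
```

The test below calls `self.get_network_template('default')`, which is the same lookup bound to the suite's own templates directory.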
@@ -61,7 +61,7 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
            1. Upload plugins to the master node.
            2. Install plugin.
            3. Create cluster with vcenter.
            4. Set CephOSD as backend for Glance and Cinder
            4. Set CephOSD as backend for Glance and Cinder.
            5. Add nodes with following roles:
               controller
               compute-vmware

@@ -70,8 +70,8 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
               3 ceph-osd
            6. Upload network template.
            7. Check network configuration.
            8. Deploy the cluster
            9. Run OSTF
            8. Deploy the cluster.
            9. Run OSTF.

        Duration 2.5 hours

@@ -80,9 +80,12 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
        """
        self.env.revert_snapshot("ready_with_9_slaves")

        plugin.install_dvs_plugin(
            self.ssh_manager.admin_ip)
        self.show_step(1)
        self.show_step(2)
        plugin.install_dvs_plugin(self.ssh_manager.admin_ip)

        self.show_step(3)
        self.show_step(4)
        cluster_id = self.fuel_web.create_cluster(
            name=self.__class__.__name__,
            mode=DEPLOYMENT_MODE,

@@ -101,29 +104,26 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):

        plugin.enable_plugin(cluster_id, self.fuel_web)

        self.fuel_web.update_nodes(
            cluster_id,
            {
                'slave-01': ['controller'],
                'slave-02': ['compute-vmware'],
                'slave-03': ['compute-vmware'],
                'slave-04': ['compute'],
                'slave-05': ['ceph-osd'],
                'slave-06': ['ceph-osd'],
                'slave-07': ['ceph-osd'],
            },
            update_interfaces=False
        )
        self.show_step(5)
        self.fuel_web.update_nodes(cluster_id,
                                   {'slave-01': ['controller'],
                                    'slave-02': ['compute-vmware'],
                                    'slave-03': ['compute-vmware'],
                                    'slave-04': ['compute'],
                                    'slave-05': ['ceph-osd'],
                                    'slave-06': ['ceph-osd'],
                                    'slave-07': ['ceph-osd']},
                                   update_interfaces=False)

        # Configure VMWare vCenter settings
        self.show_step(6)

        target_node_1 = self.node_name('slave-02')
        target_node_2 = self.node_name('slave-03')
        self.fuel_web.vcenter_configure(
            cluster_id,
            target_node_1=target_node_1,
            target_node_2=target_node_2,
            multiclusters=True
        )
        self.fuel_web.vcenter_configure(cluster_id,
                                        target_node_1=target_node_1,
                                        target_node_2=target_node_2,
                                        multiclusters=True)

        network_template = self.get_network_template('default')
        self.fuel_web.client.upload_network_template(

@@ -138,21 +138,24 @@ class TestNetworkTemplates(TestNetworkTemplatesBase, TestBasic):
        logger.debug('Networks: {0}'.format(
            self.fuel_web.client.get_network_groups()))

        self.show_step(7)
        self.fuel_web.verify_network(cluster_id)

        self.show_step(8)
        self.fuel_web.deploy_cluster_wait(cluster_id, timeout=180 * 60)

        self.fuel_web.verify_network(cluster_id)

        self.check_ipconfig_for_template(cluster_id, network_template,
                                         networks)
        self.check_ipconfig_for_template(
            cluster_id, network_template, networks)
        self.check_services_networks(cluster_id, network_template)

        self.fuel_web.run_ostf(cluster_id=cluster_id,
                               timeout=3600,
                               test_sets=['smoke', 'sanity',
                                          'ha', 'tests_platform'])
        self.check_ipconfig_for_template(cluster_id, network_template,
                                         networks)
        self.show_step(9)
        self.fuel_web.run_ostf(
            cluster_id=cluster_id,
            timeout=3600,
            test_sets=['smoke', 'sanity', 'ha', 'tests_platform'])

        self.check_ipconfig_for_template(
            cluster_id, network_template, networks)

        self.check_services_networks(cluster_id, network_template)