VMware has not been supported since Fuel 10, so we should stop testing it.
Change-Id: I5996520ded3419fd2ce2cb1e76056eed157bfffb
Implements: blueprint remove-vmware
The feature was implemented in Nova in the Liberty release cycle,
but it requires QEMU 2.5 and libvirt 1.2.19.
Now that the packages have been updated, the feature can be used.
This test executes a basic check for the feature.
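A minimal sketch of the kind of precondition such a check can assert: verifying that the installed QEMU and libvirt versions meet the feature's minimums (the comparison helper is illustrative, not taken from the test code):

```python
def version_at_least(installed, required):
    """Compare dotted version strings numerically, e.g. '2.5.0' >= '2.5'."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

# Feature requirements stated in the commit message:
print(version_at_least("2.5.0", "2.5"))      # QEMU -> True
print(version_at_least("1.2.19", "1.2.19"))  # libvirt -> True
```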
Change-Id: I5c8dc2471e1f66df221be5f554c30704b7aac479
Test scenario 1:
1. Revert snapshot "ready_with_3_slaves"
2. Create env with 3 controller nodes and 1 compute+cinder node
3. Provision nodes
4. Upload two simple graphs
5. Make snapshots for next tests and resume snapshot
6. Execute graphs
7. Check that the graph tasks were executed and
finished without any errors
8. Check the file created by the graph tasks
Test scenario 2:
1. Revert snapshot "extension_graph_prepare_env"
2. Execute graphs via API
3. Check that the graph tasks were executed and
finished without any errors
4. Check the file created by the graph tasks
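As an illustration of step 2, the request body for executing custom graphs via the API might be composed as below (a sketch following the graph-concept-extension blueprint; the payload keys are assumptions, not copied from the test code):

```python
import json

def build_graph_execute_payload(cluster_id, graph_types, dry_run=False):
    """Compose a JSON body for a Nailgun graph-execution request."""
    return {
        "cluster": cluster_id,
        "graphs": [{"type": graph_type} for graph_type in graph_types],
        "dry_run": dry_run,
    }

print(json.dumps(build_graph_execute_payload(1, ["custom_graph"]), sort_keys=True))
```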
Change-Id: I45849d680c1224771ac8db61827ac56950c903f4
Closes-Bug: #1619638
Blueprint: graph-concept-extension
Add two tests with the following steps:
Deploy environment with enabled DMZ network for API.
Scenario:
1. Revert snapshot with ready master node
2. Create new environment
3. Run network verification
4. Deploy the environment
5. Run network verification
6. Run OSTF
7. Reboot cluster nodes
8. Run OSTF
9. Create environment snapshot deploy_env_with_public_api
Check that security rules are properly applied for DMZ network
Scenario:
1. Revert snapshot from previous test
2. Run instance
3. Try to access Horizon from the instance
4. Remove instance
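Step 3 can be reduced to checking the HTTP status returned by curl from inside the instance; a hypothetical helper for parsing that output:

```python
def horizon_reachable(curl_head_output):
    """Treat 2xx/3xx status lines (e.g. 'HTTP/1.1 200 OK') as reachable."""
    lines = curl_head_output.splitlines()
    if not lines:
        return False
    parts = lines[0].split()
    return (len(parts) >= 2
            and parts[0].startswith("HTTP/")
            and parts[1][:1] in ("2", "3"))

print(horizon_reachable("HTTP/1.1 200 OK\r\nServer: nginx"))  # True
```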
Implements: blueprint test-separate-public-floating
Change-Id: I70474b5cab324aa4f4a042127d4e6961c95010bf
There are new handlers in the Fuel CLI which are
intended to support CRUD operations on
ConfigDB.
Working with values (levels and overrides) is verified.
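The levels-and-overrides idea can be sketched as a simple layered merge, where overrides win over level values, which in turn win over defaults (a simplification of the real ConfigDB/tuning-box semantics):

```python
def resolve_effective_values(defaults, level_values, overrides):
    """Merge ConfigDB-style value layers; later layers take precedence."""
    effective = dict(defaults)
    effective.update(level_values)
    effective.update(overrides)
    return effective

print(resolve_effective_values(
    {"debug": False, "pool_size": 10},
    {"debug": True},        # e.g. a value set at some level
    {"pool_size": 20},      # an explicit override
))
```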
Change-Id: I531733b296765b0d472644fbe3739e03c0475fca
Closes-bug: #1616047
Test cases:
1. Reset and redeploy a cluster with vCenter after a successful deployment
2. Stop and redeploy a cluster with vCenter with new parameters
3. Redeploy a cluster with vCenter after a failed deployment
Closes-Bug: #1612578
Change-Id: Ia0de031e9c6d7a72f126a28b8e150496350c5a87
1. Drop empty fuelweb_test/helpers/exceptions.py
2. Drop the no-longer-used fuelweb_test/helpers/http.py
3. Drop execute_throw_remote, run_in_remote, run_on_remote_get_results,
execute_remote_cmd.
4. Drop unused private deserializers from SSHManager
5. Fix a typo in the deprecation message
Change-Id: I56a220e49f44a4f22b2b3499acf022fef923b323
Next upgrade refactor step:
it is now possible to move each test suite to a separate file.
This refactor will simplify future changes.
Change-Id: Id0efc1521bcf00bc2888cf5300ba17a25173ceca
1. test_os_upgrade is moved under test_os_upgrade
2. The main code is moved to a separate place to avoid copy-paste
in new tests.
3. Fix logging that produced unreadable output
4. Switch to check_call to remove the pain of cherry-picking
5. Use octane-cleanup
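The win from check_call (item 4) is that a failing step raises instead of passing silently; a minimal illustration:

```python
import subprocess

def run_step(cmd):
    """Run a shell step; raises CalledProcessError on non-zero exit."""
    subprocess.check_call(cmd, shell=True)

try:
    run_step("exit 3")
except subprocess.CalledProcessError as exc:
    print("step failed with code", exc.returncode)  # step failed with code 3
```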
Closes-bug: #1612236
Change-Id: Ida7f5901f070a9ef507ce6027fd2618b8617d89f
(cherry picked from commit a9f7dd2)
Next upgrade refactor step: it is now possible to move each test suite to a separate file.
This refactor will simplify future changes.
Change-Id: I7587dfe038c9ca9a64e12b721b8cf83e66d60ddc
Since we are no longer testing the detach-db plugin, it is time to switch to the example plugin.
- Move plugin test to separate file
- Replace detach-db plugin with example_v3 plugin
- Download the plugin by URL passed via EXAMPLE_V3_PLUGIN_REMOTE_URL env var
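A sketch of reading the plugin location from the environment (the env var name comes from this change; the URL used in the demo is a made-up placeholder, not a real plugin location):

```python
import os
from urllib.parse import urlparse

def plugin_local_path(download_dir="/tmp"):
    """Derive a local file name for the plugin fetched from the env-var URL."""
    url = os.environ.get("EXAMPLE_V3_PLUGIN_REMOTE_URL")
    if not url:
        raise RuntimeError("EXAMPLE_V3_PLUGIN_REMOTE_URL is not set")
    return os.path.join(download_dir, os.path.basename(urlparse(url).path))

# Demo with a placeholder URL:
os.environ["EXAMPLE_V3_PLUGIN_REMOTE_URL"] = \
    "http://example.com/plugins/fuel_plugin_example_v3.rpm"
print(plugin_local_path())  # /tmp/fuel_plugin_example_v3.rpm
```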
Change-Id: I99217686be856acd8e69e93e5dd3ae40cbb1f15b
Closes-Bug: #1603415
To execute chained Fuel upgrades and reuse existing code, we need
to create a tool for configurable upgrades. For now it is configured via
environment variables, but later it can be tuned via YAML files.
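A sketch of the env-driven configuration style described above (the variable names and defaults are illustrative assumptions, not the tool's actual interface):

```python
import os

def load_upgrade_config(env=None):
    """Read chained-upgrade settings from environment variables."""
    env = os.environ if env is None else env
    return {
        "source": env.get("UPGRADE_SOURCE", "7.0"),
        "target": env.get("UPGRADE_TARGET", "9.0"),
        "steps": env.get("UPGRADE_STEPS", "backup,install,restore").split(","),
    }

print(load_upgrade_config({"UPGRADE_TARGET": "10.0"}))
```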
Change-Id: I4fc2281801ad2a02c63dc0eb92941aadca8b9d33
1) Use fuel-devops >= 3.0.0
2) Use the centos_master.yaml devops template
3) Export the CENTOS_MASTER=True environment variable
4) Provide paths via CENTOS_CLOUD_IMAGE_PATH, FUEL_RELEASE_PATH and
EXTRA_DEB_REPOS
Change-Id: I1542c2238abc364713f02e4bca6ec7646883bf78
Closes-Bug: #1592419
The problem is that after the third reboot, nodes go
into maintenance mode and become unavailable for further
testing. We need to disable the UMM feature to prevent this behaviour.
Change-Id: I1cce936201872f47d13e3c482e23e1ba4cfc24b2
Closes-Bug: #1588877
- Add a test for basic Murano validation during plugin installation
- Fix test names in Murano-related box-install tests since Murano
now uses the GLARE backend by default.
- Turn off SSL deployment for Murano due to bug #1590633
Change-Id: Iac71f4706db4b8eb67a7f98b2932fbb31032bc9f
Related-Bug: #1584791
Targets: blueprint murano-fuel-plugin
Split code across several files
Move file changes parsing actions to content_parser.py
Add 'BaseGerritClient' class
Move specific actions related to requesting data from the server
from 'FuelLibraryModulesProvider' class to GerritClient class
Add 'TemplateMap' class to gerrit_info_provider.py
Clean up the 'FuelLibraryModulesProvider' class by pulling unrelated code out
Add rules.py with parsing rules
Add new handling rules for osnailyfacter/{manifests,templates}
Add new handling rules for openstack_tasks/{manifests,examples,lib/facter}
Remove gathering modules from dependent/related reviews
Register new files in doc/helpers.rst
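The shape of such a path-handling rule can be sketched with glob patterns (illustrative only; the real rules live in rules.py):

```python
import fnmatch

# Rule table: (pattern, label). Patterns mirror the paths named above.
RULES = [
    ("osnailyfacter/manifests/*", "osnailyfacter manifest"),
    ("osnailyfacter/templates/*", "osnailyfacter template"),
    ("openstack_tasks/lib/facter/*", "openstack_tasks custom fact"),
]

def classify(path):
    """Return the first rule label whose pattern matches the changed path."""
    for pattern, label in RULES:
        if fnmatch.fnmatch(path, pattern):
            return label
    return "unclassified"

print(classify("osnailyfacter/manifests/cluster.pp"))  # osnailyfacter manifest
```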
Change-Id: I7480ac712ff6a8467ec0ddff3779f4f2dba716ce
Related-Bug: #1583045
The script collects SWARM test results from Jenkins for all subbuilds,
gets bug and test info for all observed failed tests, and
groups those tests by the failure reasons found during
analysis across all failed tests in all subbuilds. Finally, it
generates an HTML report.
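The grouping step can be sketched as bucketing failed tests by reason (reason strings here are invented examples):

```python
from collections import defaultdict

def group_by_reason(failures):
    """Map failure reason -> list of test names, preserving input order."""
    groups = defaultdict(list)
    for test_name, reason in failures:
        groups[reason].append(test_name)
    return dict(groups)

print(group_by_reason([
    ("test_a", "Timeout waiting for OSTF"),
    ("test_b", "SSH connection refused"),
    ("test_c", "Timeout waiting for OSTF"),
]))
```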
Implements blueprint: fuel-qa-failure-reason-grouping
Change-Id: Ie6955a206ce72d756a9700a204a3123ab4b10997
- Add test with external load balancer. Controllers are
from different racks and haproxy is from rack-3
- Separate devops config with appropriate networks
assigned to nodes is used
- Local repos for cluster are used because public networks
are routed without internet connection
- OSTF isn't run because it is not intended to work with a
separate haproxy
Closes-Bug: #1583530
Change-Id: I0d3647c8eb13159c27e64ddf5925467f451b610c
This patch implements tests for Unlock Settings
Tab Feature with randomly changed settings.
Scenario:
1. Load clusters' configurations from the file
2. Revert snapshot with appropriate nodes count
3. Create a cluster from config
4. Update nodes accordingly to the config
5. Deploy the cluster
6. Get cluster attributes
7. Randomly modify cluster attributes
8. Add Ceph nodes if needed
9. Update cluster attributes with the changed ones
10. Redeploy cluster
11. Run OSTF
12. Go to the next config
Duration xxx m
A snapshot will be made for all failed configurations
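Step 7's random modification can be sketched as re-rolling boolean attributes with a seeded RNG so failures stay reproducible (the attribute layout is simplified relative to real cluster attributes):

```python
import random

def mutate_bool_attributes(attributes, rng):
    """Return a copy of the attributes with every boolean re-rolled at random."""
    return {
        name: rng.choice([True, False]) if isinstance(value, bool) else value
        for name, value in attributes.items()
    }

rng = random.Random(42)  # fixed seed keeps the run reproducible
print(mutate_bool_attributes({"debug": False, "timeout": 60}, rng))
```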
Change-Id: I3376dc29cf8083ead742384725e3e0a10dae2b34
- Do not use RH_* variables for images
- Parameterize some methods so they can be used in different tests
- Add a basic deployment test with an OL compute
Change-Id: I86e616640ff4a162b74ee0da74917f06105b7a39
Related-Bug: #1569374
Add tests for Unlock Settings Tab Feature for
partially deployed cluster and for failed
deployment.
Check the unlock settings tab after a partial deploy
Scenario:
1. Revert snapshot ready_with_3_slaves
2. Create a new env
3. Add controller and 2 computes
4. Provision nodes without deploy
5. Select some nodes (not all) and deploy them
6. Download current settings and modify some of them
7. Upload changed settings
8. Re-deploy cluster
9. Run OSTF
10. Make snapshot
Duration 90m
Snapshot partially_deployed_unlock
Check the unlock settings tab after a failed deploy
Scenario:
1. Revert snapshot ready_with_3_slaves
2. Create a new env
3. Add controller and 2 computes
4. Change netconfig task to fail deploy
5. Deploy the env
6. Download current settings and modify some of them
7. Upload changed settings
8. Change netconfig task to normal state
9. Re-deploy cluster
10. Run OSTF
11. Make snapshot
Duration 60m
Snapshot failed_deploy_unlock
Change-Id: I84759ae0a603c4749dce2ac9b36c3a3c822514ee