Deployment behaviour is now driven by tags rather than role names,
i.e. the role name alone can no longer be relied upon.
Change-Id: Icfabeeb0b7fb6a9d697a09c3cf1fa020bbd4c323
Closes-Bug: #1669743
On the XL710 NIC with the 'i40e' driver, the MTU does not take the
4-byte VLAN tag into account, so we increase it manually.
DocImpact
Change-Id: I3d95db9ec6fae4d8cd397c429d785dbdf1502b21
Partial-Bug: #1587310
Co-Authored-By: Fedor Zhadaev <fzhadaev@mirantis.com>
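The fix above can be sketched as a small helper; the function name and the
driver check are illustrative, not the actual nailgun code:

```python
# Hypothetical helper: compensate for the 4-byte 802.1Q VLAN tag that the
# i40e driver does not include in the configured MTU.
VLAN_TAG_SIZE = 4  # bytes added by an 802.1Q header


def effective_mtu(requested_mtu, driver):
    """Return the MTU to program on the NIC so that VLAN-tagged frames
    of `requested_mtu` size still fit (i40e-specific workaround)."""
    if driver == 'i40e':
        return requested_mtu + VLAN_TAG_SIZE
    return requested_mtu
```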
Refactor interface logic:
* remove interface_properties
* CRUD operations for NIC attributes
* default values for NIC meta and attributes
Change-Id: I26106f1b55c704a9e79d01fadc48c88a92ccc414
Implements: blueprint nics-and-nodes-attributes-via-plugin
This commit switches the task resolution approach to a tag-based one.
A tag is the minimal unit needed for task resolution and can be mapped
to a node only through the role interface. Each role provides a set of
tags in its 'tags' field, which may be modified via the role API. A tag
may also be created separately via the tag API, but it cannot be used
until it is attached to a role.
Change-Id: Icd78fd124997c8aafb07964eeb8e0f7dbb1b1cd2
Implements: blueprint role-decomposition
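The resolution scheme described above can be sketched as follows; the data
shapes and function names are illustrative, not the actual nailgun API:

```python
# Sketch of tag-based task resolution: a node acquires tags only through
# its roles, and a task runs on the nodes whose tags intersect its own.
def node_tags(node_roles, roles_metadata):
    """Collect all tags a node gets via its roles."""
    tags = set()
    for role in node_roles:
        tags.update(roles_metadata.get(role, {}).get('tags', []))
    return tags


def resolve_nodes(task_tags, nodes, roles_metadata):
    """Return ids of nodes whose role-provided tags match the task."""
    wanted = set(task_tags)
    return [n['id'] for n in nodes
            if wanted & node_tags(n['roles'], roles_metadata)]
```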
* the restriction on using DPDK with VXLAN-based
  segmentation was removed
* when VXLAN is used with DPDK, the 'br-mesh' bridge
  is configured like 'br-prv', and the vendor-specific
  attribute 'vlan_id' was added for 'add-br'
* appropriate tests were added
Implements: blueprint vxlan-support-for-ovs-dpdk
Change-Id: I1c4978a15df6f851339a346fe6c4812c5427dd29
Currently it is not possible to create duplicate tags for a particular
owner; for example, you cannot create two tags named 'test' for a release.
However, the list of tags available to a specific cluster contains the
tags provided by its release and the tags provided by its enabled plugins,
so a tag with the same name can still be created for both a cluster and
its release.
To avoid such cases, this patch introduces the notion of a 'namespace':
a space within which all tag names must be unique.
Three kinds of tag namespaces are distinguished:
* cluster namespace = #1 (cluster tags) +
#2 (tags of enabled plugins for this cluster) +
#3 (tags of the release what cluster belongs to)
* release namespace = #1 (release tags) +
#2 (tags of clusters created with this release) +
#3 (tags of enabled plugins for clusters from #2)
* plugin namespace = #1 (plugin tags) +
#2 (tags of clusters where plugin is enabled) +
#3 (tags of releases connected to clusters from #2)
Change-Id: I1ac9c0d30d9e4a070069b7d9ee3c6670f01802b8
Implements: blueprint role-decomposition
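The uniqueness rule amounts to checking a candidate name against the union
of the tag sets that form the owner's namespace. A minimal sketch, with
illustrative function names:

```python
# Cluster namespace = own tags + enabled-plugin tags + release tags,
# per the decomposition described above.
def cluster_namespace(cluster_tags, plugin_tags, release_tags):
    """Union of all tag names visible to a cluster."""
    return set(cluster_tags) | set(plugin_tags) | set(release_tags)


def can_create_tag(name, namespace):
    """A tag may be created only if its name is unique in the namespace."""
    return name not in namespace
```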
A 'tags' attribute has been added to each role in 'roles_metadata'.
Initially all non-controller roles will only have a tag matching their
own role name. This allows existing tasks that have no tags associated
with them to keep working correctly: in the absence of tags, a task's
roles are used to determine which nodes it will run on.
Implements: blueprint role-decomposition
Change-Id: I390580146048b6e00ec5c42d0adf995a4cff9167
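The fallback behaviour can be sketched in one function; the task dict
layout is illustrative:

```python
# If a task carries no tags, its roles double as its tag set. This works
# because each role initially provides a tag equal to its own name.
def task_selector(task):
    """Return the tag set used to place the task on nodes."""
    return set(task.get('tags') or task.get('roles') or [])
```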
The size of deployment_info grows as n^2 with the number of nodes,
because common_attrs, which is merged into each node's attributes,
contains information about all nodes. For 600 nodes, for example, we
store about 1 GB of data in the database. As a first step, common_attrs
is now stored separately within the deployment_info structure, both in
the Python code and in the database.
Also removed old migration tests that no longer correspond to the
actual database state.
Change-Id: I431062b3f9c8dedd407570729166072b780dc59a
Partial-Bug: #1596987
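The deduplication idea can be sketched as follows; the structure is
illustrative, not the actual nailgun schema:

```python
# Store shared data once instead of copying it into every node's record
# (which made total size O(n^2) in the node count); merge on read.
def build_deployment_info(common_attrs, per_node_attrs):
    """Keep common_attrs in one place alongside the per-node data."""
    return {
        'common_attrs': common_attrs,   # stored once
        'nodes': per_node_attrs,        # only node-specific data
    }


def attrs_for_node(deployment_info, uid):
    """Reconstruct the full attribute set for one node on demand."""
    merged = dict(deployment_info['common_attrs'])
    merged.update(deployment_info['nodes'][uid])
    return merged
```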
Added support for node filters and node_transitions.
If one of the node_transitions statuses is not specified,
the default is used:
- on success: switch node to status ready
- on error: switch node to status error
- on stop: switch node to status stopped
Change-Id: I8b4d49dc1bada2479017697bf5858e85958579f2
Blueprint: graph-concept-extension
The following attributes were added:
- node_filter: YAQL expression to select the nodes the graph applies to
- node_attributes_on_success: attributes applied to a node
  if execution of the graph completes successfully
- node_attributes_on_fail: attributes applied to a node
  if execution of the graph fails
- node_attributes_on_stop: attributes applied to a node
  if execution of the graph is interrupted
Blueprint: graph-concept-extension
Change-Id: I949b3971f62c17b1d243c8ed97e2802afa4e0cce
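The interplay of the new attributes can be sketched as below; for
simplicity, node_filter is modelled as a plain Python predicate rather
than a YAQL expression, and all names are illustrative:

```python
# Select nodes with the graph's filter and apply the attribute set that
# matches the execution outcome ('success', 'fail' or 'stop').
def finalize_nodes(graph_meta, nodes, outcome):
    """Apply on_success/on_fail/on_stop attributes to filtered nodes."""
    key = 'node_attributes_on_%s' % outcome
    attrs = graph_meta.get(key, {})
    selected = [n for n in nodes if graph_meta['node_filter'](n)]
    for node in selected:
        node.update(attrs)
    return selected
```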
Added a migration from Fuel 8.0 to 9.0 that adds a default rule for
picking the bootable disk to the volume metadata of the release.
Change-Id: I5d151a29bf52ac3a519049c38b2b671c087f968f
Closes-Bug: #1595209
This change introduces a new callback on_nodegroup_delete
which is called when a nodegroup is deleted.
It also adds a decorator that can be used to generate
before and after callbacks for any method.
Change-Id: Ia1c4ef3956175af6c223af854c9543cd781e8dbf
Blueprint: network-manager-extension
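The decorator idea can be sketched as follows; the names and the
extension lookup are illustrative, not the actual nailgun implementation:

```python
import functools


# Wrap a method so that every registered extension gets a chance to run
# an 'on_before_<event>' callback before it and an 'on_<event>' callback
# after it; extensions that lack a callback are simply skipped.
def fire_callbacks(event):
    """Generate before/after extension callbacks for a method."""
    def decorator(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            for ext in self.extensions:
                getattr(ext, 'on_before_%s' % event, lambda *a: None)(*args)
            result = method(self, *args, **kwargs)
            for ext in self.extensions:
                getattr(ext, 'on_%s' % event, lambda *a: None)(*args)
            return result
        return wrapper
    return decorator
```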
This patch:
* extends current DB model with new entities and provides
related migrations.
* extends plugin sync method to support storing new plugins
attributes.
* provides a cosmetic fix for the ClusterPlugin model: table names
  are written in the plural, model names in the singular.
Change-Id: I3edbde1d48461ce3fab7c93f17e2db5332b1f7fb
Implements: blueprint nics-and-nodes-attributes-via-plugin
Adding this flag to a given disk allows the user to choose the bootable
disk from the Fuel UI and CLI.
This change is especially important for deployments with
multipath-connected block devices: the order of disks in the UI can
easily change in that case, so the user needs a way to choose the
bootable disk explicitly.
Change-Id: I22ffe9104d2ec5a6598d496691fffa0087111070
Partial-Bug: #1567450
Since restriction models were initialized in a couple of places, the
initialization has been moved to the cluster object.
Also, comments from the previous commit
(Ibba7951968cbafd59fff0d516e74f9dd9e454edc) are addressed.
This is pure refactoring; no bug report is needed.
Change-Id: Ic499a5deefb12740ebedc630b024dae0b4248ec5
Calls to network manager from object methods have been
moved into callbacks implemented by the network manager
extension.
This commit makes the following new callbacks available
to extensions:
* on_cluster_create: called when a new cluster is created
* on_cluster_patch_attributes: called when a cluster's attributes
are updated
* on_nodegroup_create: called when a node group is created
* on_before_deployment_serialization: called before deployment
serialization begins
* on_before_provisioning_serialization: called before provisioning
serialization begins
* on_remove_node_from_cluster: called when a node is removed from
a cluster
Blueprint: network-manager-extension
Change-Id: I9a3413f54c881edd098e623ea204d12a86695f87
When a Release, Plugin or Cluster is deleted, the related
deployment graphs are deleted as well.
Note that the current DeploymentGraph deletion scheme works only if a
DeploymentGraph has a single relation to its parent; otherwise unwanted
relations may be affected by the graph cleanup.
Change-Id: If489879a3d4ca01ba2335dd279136c57e1bad171
Closes-Bug: #1567471
Closes-Bug: #1557632
Currently the root plugin attribute values are inconsistent with the
attributes of the specific plugin version provided by the client. This
patch fills the plugin attributes with the proper values for the
specific plugin version.
Change-Id: I1c85d6e080f8fd16d5b65c1bf670fdfb3ba0ff1b
Closes-Bug: #1573440
The /openstack-config/execute/ handler now supports the
?graph_type=my-graph-name parameter.
Change-Id: Iaed6af093f0e2a66db29d2185104bc1e8c80fad2
Partial-Bug: #1567504
The new option 'propagate_task_deploy' was added to the cluster.
This option allows using the legacy task adaptation algorithm to make
tasks from granular deployment LCM-ready on the fly.
The same approach is used to adapt legacy plugin tasks.
Change-Id: Ib212bd906acc0e6915e3c14e4741b306bdedaa98
Closes-Bug: 1572276
nailgun.errors has a huge set of exceptions but no hierarchy. This
patch removes exception generation from a dict, defines the exceptions
explicitly as Python classes, and adds a hierarchy: all network errors
now inherit from NetworkException, and likewise for the other
exception groups.
Change-Id: I9a2c6b358ea02a16711da74562308664ad7aed97
Closes-bug: #1566195
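The hierarchy style reads roughly as below; the leaf class names are
illustrative, only the base-class pattern follows the commit:

```python
# Explicit exception classes with a shared hierarchy, replacing
# exceptions generated dynamically from a dict.
class NailgunException(Exception):
    """Base class for all nailgun errors."""


class NetworkException(NailgunException):
    """Base class for all network-related errors."""


class OutOfVLANs(NetworkException):
    """Raised when no free VLAN ids remain (illustrative leaf)."""


class OutOfIPs(NetworkException):
    """Raised when no free IP addresses remain (illustrative leaf)."""
```

The benefit is that callers can now catch a whole group at once, e.g.
`except NetworkException`, instead of enumerating individual errors.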
Replaced map() with for loops or list comprehensions in places
where it was used improperly.
Change-Id: I094abb3c6ec3c041f0fff6aed0c456158e1cc8f7
Closes-Bug: 1567849
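A short example of the cleanup pattern (the data is illustrative): under
Python 3, map() returns a lazy iterator, so a map() used only for its side
effects silently does nothing unless consumed, which is why such call
sites are rewritten.

```python
nodes = [' node-1 ', ' node-2 ']

# When the result is wanted, a list comprehension is the idiomatic form
# (an eagerly evaluated list, unlike Python 3's lazy map object).
cleaned = [n.strip() for n in nodes]

# When only the side effect is wanted, an explicit loop is used instead
# of map(seen.add, cleaned), which would never execute if not consumed.
seen = set()
for name in cleaned:
    seen.add(name)
```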
In the default setup we should not assign a public IP to all nodes,
only to those with the controller role, so we filter out the public
network for nodes that should not have it.
Change-Id: I2a9ea4d06cc1ba15bad20b817659b7539827472a
Closes-Bug: 1415552
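The filtering rule can be sketched in a few lines; the function name and
the flat network list are illustrative, not the actual nailgun data model:

```python
# Keep the 'public' network only for nodes that carry the controller role.
def filter_out_public(node_roles, networks):
    """Drop the public network for non-controller nodes."""
    if 'controller' in node_roles:
        return networks
    return [net for net in networks if net != 'public']
```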
Removed the separate logic for checking whether network modifications
are allowed; the cluster.is_locked property is now used instead.
Change-Id: If23037731b6764c7e8c5c1243eeda7b8e6ee9c3e
Closes-Bug: 1568780
All network-related objects have been moved into the
network_manager extension and import paths have been updated.
Blueprint: network-manager-extension
Change-Id: I6e16df86a58d6192d312e8e8955ed38912d2b059
This moves the files for NetworkManager and its sub-classes into
a new extension. All import paths have been updated.
Blueprint: network-manager-extension
Change-Id: Icc2410fd9c411a47a3dee4573d4ef6f1a039c303