When a port was added on a bond, the interface type was set to
'ether'. This was the root cause of the issue: the bond was
processed as a NIC.
Closes-Bug: #1602817
Change-Id: I6a5136ce9ce5398aa6d55c795857769a7a41f7b0
This patch adds support for deploying Ironic with a separate Neutron
network for provisioning baremetal instances.
* Add NetworkDeploymentSerializer100, as Ironic multitenancy is
  supported since Newton and we will backport this down to
  stable/newton.
* Update network scheme generation to create a 'vlan' baremetal
  network, assign IPs to Ironic conductors from this network, and
  make them accessible from baremetal servers.
* Add a new checkbox on the 'OpenStack Settings/Additional components'
  tab which defines whether a separate provisioning network should be
  used during deployment. This is the trigger that switches the Ironic
  deployment to the multitenancy case. If it is not selected, the old
  behaviour is kept and a 'flat' network is used. The checkbox is
  shown only when the Ironic component is enabled.
Change-Id: I861a8b3b046202526d6a856c9f2dca2cfaddc887
Related-Bug: #1588380
Nailgun uses the block of nodes in a stop operation to reset
such nodes to the discovery state. Nailgun also used this
data to calculate the number of nodes for notifications.
But Astute does not send info about nodes in the case
of task deployment.
This patch excludes the node count from the stop notification
to prevent a misleading message about a successful operation
for 0 nodes.
Change-Id: I32da2ccce11b22378f58759703fc4a56e31fd993
Closes-Bug: #1672964
We are going to use provisioned cluster nodes as workers for
distributed task serialization. The python-distributed package
provides dask-worker for nailgun code execution. The other
packages are nailgun requirements.
Change-Id: I95b7682c64fe2eedb26fc80046909974cc792c91
Implements: blueprint distributed-serialization
Distributed serialization is implemented with the python distributed
library. We have a scheduler for job management and workers for
job processing. The scheduler is started on the master node, along
with a set of workers there; workers are also started on all nodes.
In the cluster settings we can select the type of serialization
and the node statuses that allow a node to act as a worker. By
default, nodes with status 'ready' are excluded from the workers
list. For data serialization we use only nodes from the cluster
being serialized.
Before the computation, fresh nailgun code is sent to the workers
as a zip file and imported for job execution, so the workers
always run fresh nailgun code.
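The ship-and-import step can be sketched with the stdlib zip-import
machinery; the module name and payload here are illustrative stand-ins
for the actual zipped nailgun source:

```python
import importlib
import os
import sys
import tempfile
import zipfile

# Build a zip archive containing a tiny module, standing in for the
# zipped nailgun source shipped to each worker before computation.
tmpdir = tempfile.mkdtemp()
archive = os.path.join(tmpdir, "payload.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("fresh_code.py",
                "def serialize(node):\n    return {'uid': node}\n")

# On the worker side: put the archive on sys.path and import from it,
# so every job runs against the freshly uploaded code.
sys.path.insert(0, archive)
fresh_code = importlib.import_module("fresh_code")
print(fresh_code.serialize("node-1"))  # {'uid': 'node-1'}
```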
Each job processes a chunk of tasks on a worker. This approach
significantly boosts performance. The task chunk size is defined
by the settings.LCM_DS_TASKS_PER_JOB parameter.
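The chunked fan-out can be sketched as follows; a stdlib thread pool
stands in for the distributed client, and the constant mirrors the
settings.LCM_DS_TASKS_PER_JOB parameter:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for settings.LCM_DS_TASKS_PER_JOB: tasks handed to one job.
TASKS_PER_JOB = 3

def process_chunk(chunk):
    # Placeholder for the real task serialization done by a worker.
    return [task.upper() for task in chunk]

tasks = ["t%d" % i for i in range(8)]
chunks = [tasks[i:i + TASKS_PER_JOB]
          for i in range(0, len(tasks), TASKS_PER_JOB)]

# With dask.distributed this would be client.map(process_chunk, chunks);
# a thread pool illustrates the same fan-out locally.
with ThreadPoolExecutor() as pool:
    results = [r for chunk_result in pool.map(process_chunk, chunks)
               for r in chunk_result]
print(results)  # ['T0', 'T1', ..., 'T7']
```

Grouping several tasks per job amortizes the per-job scheduling and
transfer overhead, which is where the performance boost comes from.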
To limit memory consumption on the master node, the
settings.LCM_DS_NODE_LOAD_COEFF parameter is used to calculate the
maximum number of jobs in the processing queue.
Synthetic tests of distributed serialization for 500 nodes with the
number of ifaces >= 5, performed on 40 cores (4 different machines),
took 6-7 minutes on average.
Change-Id: Id8ff8fada2f1ab036775fc01c78d91befdda9ea2
Implements: blueprint distributed-serialization
There were URLs and handlers for VMware in nailgun that were
removed completely in
review.openstack.org/#/c/428402/15/nailgun/nailgun/api/v1/urls.py
The better approach is to keep the URLs and provide special stub
handlers instead of the real ones.
Change-Id: I50bf740ec726c9cc57ff63d49aff718e812e6feb
Closes-Bug: #1668258
Doing this, we avoid including the task cache update statement in the
next transaction, which may cause various problems such as deadlocks.
(The update happens inside the make_astute_message() function.)
Change-Id: I865b98beb621bee089cf79f1304498fd3637d64f
Closes-Bug: #1618852
The deployment behaviour is now driven by tags rather than role names,
i.e. a role name cannot be relied upon.
Change-Id: Icfabeeb0b7fb6a9d697a09c3cf1fa020bbd4c323
Closes-Bug: #1669743
There is the possibility to change the OpenStack config after
deployment. Changes can be applied per role, and in the case of
multiple roles on a node, several configs may apply. E.g. if we have
a config for the role 'compute', a config for the role 'cinder' and
a node with roles 'cinder+compute', we have to choose one of them.
The previous decision was 'sort in lexicographical order', so
'cinder' was applied and then 'compute'. That is counterintuitive.
The better option is to apply the last related config to a node,
so a new config overrides the old ones.
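The selection rule can be sketched like this; the config records and
their fields are illustrative, not the actual nailgun models:

```python
def pick_config(node_roles, configs):
    """Pick the most recently created config matching one of the
    node's roles, so newer configs override older ones (replacing
    the old lexicographical-order rule)."""
    matching = [c for c in configs if c["role"] in node_roles]
    if not matching:
        return None
    return max(matching, key=lambda c: c["id"])  # highest id == newest

configs = [
    {"id": 1, "role": "compute", "data": {"nova": "a"}},
    {"id": 2, "role": "cinder", "data": {"cinder": "b"}},
]
# For a cinder+compute node the later 'cinder' config now wins,
# not the lexicographically first one.
print(pick_config({"cinder", "compute"}, configs)["role"])  # cinder
```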
Change-Id: I7db388ca3baeb351adc9fdb70c55b0be50fafe48
Closes-bug: #1671521
There was a commit that removed VMware support. It contains alembic
migrations but no tests for them. This commit fixes that.
Change-Id: I66090b0a0d7bfbd8e2365ec027fabfefc9d612da
Closes-Bug: #1668249
On the XL710 NIC with the 'i40e' driver, the MTU does not take the
4-byte VLAN tag into account, so we should increase it manually.
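The adjustment amounts to something like the following sketch; the
helper name and its placement are illustrative, only the driver name
and the 4-byte overhead come from the bug itself:

```python
VLAN_HEADER_LEN = 4  # bytes of 802.1Q tag not counted by the i40e driver

def effective_mtu(requested_mtu, driver, has_vlans):
    """Bump the physical NIC MTU so tagged frames still fit.
    (Illustrative helper; the real fix lives in network
    scheme serialization.)"""
    if driver == "i40e" and has_vlans:
        return requested_mtu + VLAN_HEADER_LEN
    return requested_mtu

print(effective_mtu(1500, "i40e", True))  # 1504
print(effective_mtu(1500, "igb", True))   # 1500
```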
DocImpact
Change-Id: I3d95db9ec6fae4d8cd397c429d785dbdf1502b21
Partial-Bug: #1587310
Co-Authored-By: Fedor Zhadaev <fzhadaev@mirantis.com>
Since Fuel 10, the Ceilometer and MongoDB services became experimental
features.
* Ceilometer and MongoDB settings are shown on the Settings tab in
  Fuel UI only if the "experimental" feature group is enabled
* The MongoDB role is available only if the "experimental" feature
  group is enabled
* The Ceilometer option is removed from the cluster creation wizard
Implements: blueprint remove-ceilometer
Change-Id: I6df3b47c14cafb1544dfe034cd9a2c0ad14205be
Since Fuel 10, the Murano service became an experimental feature.
* Murano settings are shown on the Settings tab in Fuel UI
  only if the "experimental" feature group is enabled
* The Murano option is removed from the cluster creation wizard
Implements: blueprint make-murano-experimental
Change-Id: I4dd0853138c045b8d7e8f6ff940c09250763a56b
Allow limiting the number of objects returned via GET
by providing "limit"
Example: api/notifications?limit=5
Allow offsetting (skipping the first N records) via "offset"
Example: api/notifications?offset=100
Allow ordering of objects by providing "order_by"
Example: api/notifications?order_by=-id
Add helper functions/classes to:
- get HTTP parameters (limit, offset, order_by)
- get a scoped collection query by applying 4 operations:
  filter, order, offset, limit
- set the Content-Range header if scope limits are present
Make the default NailgunCollection GET utilize the scoped query.
This makes the default (parent) GET of child handlers support paging
and ordering (overridden GET methods will not get this functionality
automatically).
NailgunCollection.GET is also an example of how to implement
this new functionality.
The helper functions/classes can be utilized in child handler methods
to implement filtering / ordering / paging.
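The scoping pipeline can be sketched in-memory as below; the function
name and the exact Content-Range wording are illustrative, while the
parameter semantics (limit, offset, "-"-prefixed order_by) follow the
examples above:

```python
def apply_scope(items, limit=None, offset=0, order_by=None):
    """Order, offset and limit a collection the way the GET
    parameters describe (in-memory stand-in for the query helpers)."""
    if order_by:
        reverse = order_by.startswith("-")   # "-id" means descending
        key = order_by.lstrip("-")
        items = sorted(items, key=lambda obj: obj[key], reverse=reverse)
    total = len(items)
    end = offset + limit if limit is not None else None
    scoped = items[offset:end]
    # Content-Range style header: objects <first>-<last>/<total>
    content_range = "objects %d-%d/%d" % (
        offset, offset + len(scoped) - 1, total)
    return scoped, content_range

notifications = [{"id": i} for i in range(1, 8)]
scoped, header = apply_scope(notifications, limit=2, offset=3,
                             order_by="-id")
print([n["id"] for n in scoped], header)  # [4, 3] objects 3-4/7
```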
Related-Bug: 1657348
Change-Id: I7760465f70b3f69791e7a0c558a26e8ba55c934a
Nodes can be excluded from deployment if there are no tasks to run on
them. Such nodes should not be switched to the deployment state.
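The filtering idea can be sketched as follows; the function and the
task mapping are illustrative, the real code works on Nailgun node
objects:

```python
def nodes_to_deploy(nodes, tasks_by_node):
    """Keep only nodes that actually have tasks to run; the rest are
    excluded and never moved into the deployment state."""
    return [n for n in nodes if tasks_by_node.get(n)]

tasks_by_node = {"node-1": ["upload_config"], "node-2": []}
print(nodes_to_deploy(["node-1", "node-2"], tasks_by_node))  # ['node-1']
```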
Change-Id: I4cd23769b7643aae7b149ba30e5b0e91a3021563
Add a simple script to set up MySQL and PostgreSQL databases. The
script can be run by users during testing and will be run by CI
systems for specific setup before running unit tests. This is exactly
what OpenStack CI currently does in project-config.
This allows changing the python-db jobs in project-config to
python-jobs, since python-jobs will call this script initially.
See also
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107784.html
Change-Id: Ib72322030d7dc6979380f74379893084610982a1
* a min value was set in consts
* an appropriate validator was added
* tests for the validator were changed
* tests for the serializer were changed
Change-Id: Ib8ccb0658bd401ce492257f855013d1d7e0f2dac
Closes-Bug: #1653081
As a workaround for decreasing UI load when fetching unread
notifications, we can mark all notifications as read.
For this purpose we add NotificationsMarkAllHandler.
Change-Id: I2e6a0daaf8712ab3064df728a8fb463ef805aa06
Partial-Bug: #1657348
Doing this, we avoid including the task cache update statement in the
next transaction, which may cause various problems such as deadlocks.
In this particular case we've got the following deadlock:
1. DeleteIBPImagesTask makes UPDATE tasks SET cache ...
2. The response handler in the receiver makes SELECT clusters FOR UPDATE
3. The code following DeleteIBPImagesTask makes SELECT clusters FOR UPDATE
4. The response handler performs SELECT tasks FOR UPDATE
Change-Id: Ic8e5f2386364421b0667f920499e031f587f726e
Closes-Bug: #1653083
To calculate notification statuses, the UI made requests that fetched
all notification data and processed it on the UI side.
We want to replace polling of the whole notification collection with
polling of the unread notifications count. This dramatically decreases
Fuel UI load in the case of a large number of notifications.
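The difference can be sketched with an in-memory table; the schema is
an illustrative stand-in for the notifications model:

```python
import sqlite3

# In-memory stand-in for the notifications table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notifications (id INTEGER, status TEXT)")
db.executemany("INSERT INTO notifications VALUES (?, ?)",
               [(1, "unread"), (2, "read"), (3, "unread")])

# Instead of fetching every row and counting on the UI side, ask the
# database for just the unread count.
unread = db.execute(
    "SELECT count(*) FROM notifications WHERE status = 'unread'"
).fetchone()[0]
print(unread)  # 2
```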
Change-Id: I8f83d4e2d7f58beaf06c489b2264ccb69f9927ce
Partial-Bug: #1657348