Add options to detect and clean up loadbalancer services which are
allocated in NSX but do not exist in Octavia.
The orphaned loadbalancer services prevent routers from being deleted
and should therefore be cleaned up prior to router deletion.
Change-Id: Ic0ad5175214cff034bd76a16fc11dbea3ccd6b13
When an Octavia loadbalancer hangs in PENDING status, it becomes
immutable and cannot be recovered or deleted in any way.
The following admin util command updates the loadbalancer status to
ERROR so it can be deleted successfully, by sending an RPC message to
the Octavia driver agent service.
Change-Id: I7ee1ba445ab4526ef8b2d271574319114a31d631
- Support NSX 3.2, which forces all the ports of a network to be migrated together
- Improve documentation
- Optionally skip enabling & disabling the migration coordinator service
Change-Id: I91900f040e22c336e7b8cc13bc8ed2f30452c80e
- Detect internal networks for distributed routers as neutron networks
- No identical port fixed ips and address pairs allowed
- Only 2 dns nameservers are allowed per subnet
- Only 1 ipv6 subnet is allowed per network
- Improve logging: Add a summary of all issues at the end, and add an option to write it to a file
Change-Id: Id195f510c3915d80755ef656912efb21b51ff9ce
- Add an admin utility to provide the mapping between compute ports
and vif-ids
- Add extension to the api replay mode to support setting it by the plugin
- Add the mapping file as a parameter to the api replay and use it in the ports migration
- Remove post migration cleanup of ports
Change-Id: Icfd3ef9f8056ee9c602ac5e85345daa59309f602
For the distributed router edges to be migrated, we need to create their
internal virtualwire networks on NSX-T as well.
For that we added an admin utility to list the necessary networks and their VNIs,
and also updated the api-replay to create those networks.
Change-Id: I183e48a0ab8fcbe04810fec94e5cce584abcec15
1. NSX|V admin utils: Add utility to list virtual wires
2. Add a network vni field to the api_replay extension
3. Let policy plugin set the vni value on the new segment
while working in api-replay mode.
Change-Id: I872edd03cdd1a7ff1422cdc12ea2a1d75b5d0bcb
Bump neutron-lib and osc-lib, and update some more requirements
Depends-on: Ie74ea517a403e6e2a7a4e0a245dd20e5281339e8
Change-Id: If34a9889fb0f137856f7c241788cf593e722d665
- Add a utility that can run the pre-checks separately with:
nsxadmin -r nsx-migrate-t2p -o validate
- Add the nsx version check
- Verify no DHCP relay config
- Add unit tests to the migration utilities
Change-Id: I49b7402c38ade40df97a2aabc84a41fe29f23731
vSphere 7 started to block this traffic, so adding those rules to be
backwards compatible.
In addition, add admin utility to fix existing edge firewalls:
nsxadmin -r routers -o nsx-update-fw
Change-Id: Ia5c2832e377a1a17ef279191ee91b6fec8f65443
This patch will allow moving neutron from using the nsx_v3 plugin to the nsx_p plugin.
This includes:
- admin utility to move all resources to the policy api:
nsxadmin -r nsx-migrate-t2p -o import (--verbose)
This utility will:
-- Migrate all neutron used & created resources using the NSX migration api
-- roll back all resources in case it failed
-- post migration fix some of the policy resources to better match the expectation
of the policy plugin
- admin utility that will clean up leftovers in the nsx_v3 db:
nsxadmin -r nsx-migrate-t2p -o clean-all
(can be used, but everything should work without calling it as well)
- Some minor changes to the policy plugin and drivers to allow them to handle migrated resources
which are a bit different from those created with the policy plugin
-- Delete DHCP server config once a migrated network is deleted
-- Update LB L7 rules by their name suffix as their full display name is unknown
Change-Id: Ic17e0de1f4b2a2d95afa61ce33ffb0bc9e667b89
The admin utilities usually run with the default config files:
/etc/neutron/neutron.conf and /etc/neutron/plugins/vmware/nsx.ini
In order to run it with custom files you can use:
nsxadmin --config-file <neutron conf path> --config-file <nsx conf path>
Change-Id: I0c75f0a616d8016a840611edab1e3b3edb53c4ad
The loadbalancers using the router LB service will be marked with
a new tag on the NSX service.
Also adding an admin utility to update existing LB services with the tag.
Change-Id: I6c38b45e4d683681a6915fd07ca296264c7d2495
IPv4 support for Policy DHCP depending on the NSX version & on config.
Including devstack support for configuration & cleanup, and an admin utility
for migration from MP implementation to Policy one.
IPv6 support will follow in a future patch.
Change-Id: I01bfb5bd530c63ca8b635bbebcac47659187077e
Before NSX 3.0 the passthrough api was used to update the admin state.
With NSX 3.0 it can be updated using the policy api.
In addition, adding a new admin utility to update this field when
upgrading to NSX 3.0
Change-Id: I4020c07db0f595b1f46014a409a585188c88454e
Adding a new configuration to let the admin control if the edge firewall
rule will see the external addresses or internal ones, thus controlling
the order of implementation.
The new parameter firewall_match_internal_addr is True by default
so it is backwards compatible.
In addition, adding an admin utility to change this flag across all
existing nat rules.
Depends-on: Ia34e42a94c10bd3f12ebc658939ed826af53658c
Change-Id: I29e7acc03bf6b845d9a727cf075cbe2b0609af34
The driver is loaded, then terminated whenever a request is issued.
This behavior causes termination of the Octavia listener which is
responsible for processing the driver status updates and
statistics.
The following change implements an agent which will execute the
listener.
Change-Id: I566aaa65df4ba7455577a539aa9eebb6cc36a099
Replace an old tier0 (that might have been deleted) with a new one
Usage:
nsxadmin -r routers -o update-tier0 --property old-tier0=<id>
--property new-tier0=<id>
Change-Id: I83200508b827586cb0a404f43ac7ec23966d1675
The admin util can set the realization and purge cycle intervals on the
policy appliance
Depends-on: Ie60e3a04980ae9d6a747f80497168e923f119824
Change-Id: I91be76d8cd2741ec36f5f80529cd295a3ee6addb
Commit Ia4f4b335295c0e6add79fe0db5dd31b4327fdb54 removed all the
neutron-lbaas code from the master (Train) branch
Change-Id: I9035f6238773aad0591436c856550b7a5e01e687
To support the case of 2 installations on the same NSX backend,
the newer installation should reuse the default OS section & NS group.
Usage:
nsxadmin -r firewall-sections -o reuse
Change-Id: I0e187cea6ffa9ca3cdb6d215530426e611c8ae20
This patch:
- Updates git.openstack based URLs to use opendev.
- Cleans up the lower-constraints.txt file to only include what we
really need.
Change-Id: I3eecd97c313c33c820ca2be8f01f6848244cd52a
nsxadmin -r orphaned-firewall-sections -o nsx-list/clean will now
also detect/delete orphaned rules inside nsx sections that belong to
neutron security groups.
Change-Id: I7f733676e29f6a2b1177b4155e5b36aee3670438
Due to a neutron bug, some metadata components in the various backend Edge
appliances are missing. This patch addresses these
issues.
Admin util command can run per Edge, per AZ or for the whole cloud.
Cases handled by the utility:
- Existing metadata proxies' internal IP is different from the IPs which are
defined in the Edge's loadbalancer object.
This case can happen when the metadata proxies are recreated for some reason.
- Edge appliance is lacking the metadata network connectivity, and the
loadbalancer objects.
This case can happen while a router or a DHCP was created by the Neutron
parent process, which failed to initialize with metadata due to a bug.
- The Edge is missing the metadata firewall rules.
This case can happen while the first interface attachment to the router was
done in the Neutron parent process context due to the bug described above.
Command syntax:
Update AZ:
nsxadmin -r metadata -o nsx-update --property az-name=az123
Update single Edge appliance:
nsxadmin -r metadata -o nsx-update --property edge-id=edge-15
Update entire cloud:
nsxadmin -r metadata -o nsx-update
Change-Id: I77de9e0a0c627e43d3b1c95573d151e0414a34a9