Commit Graph

62 Commits

Author SHA1 Message Date
Edward Hope-Morley 36bf1df46d Partially revert "Do not link up HA router gateway in backup node"
This partially reverts commit c52029c39a.

We revert everything except one minor addition to
neutron/agent/l3/ha_router.py which ensures that ha_confs path is
created when the keepalived manager is initialised.

Closes-Bug: #1965297
Change-Id: I14ad015c4344b32f7210c924902dac4e6ad1ae88
2022-05-24 11:24:30 +00:00
Rodolfo Alonso Hernandez d73ec5000b [L3] Fix "NDPProxyAgentExtension.ha_state_change" call
The parameter "data" passed to the method "ha_state_change" is not
a router but a dictionary with "router_id" info.

The method "NDPProxyAgentExtension._process_router" requires the
router ID and the "enable_ndp_proxy" value, stored in the agent
router cache.
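
As a rough, standalone sketch of the corrected call flow (the class below and
its router_cache attribute are simplified stand-ins, not the actual Neutron
implementation):

    # Hypothetical, simplified extension for illustration only.
    class NDPProxyAgentExtensionSketch(object):
        def __init__(self, router_cache):
            # router_cache maps router_id -> cached router info,
            # including the "enable_ndp_proxy" flag.
            self.router_cache = router_cache

        def ha_state_change(self, context, data):
            # "data" is a dict such as {"router_id": ..., "state": ...},
            # not a full router object.
            router_id = data["router_id"]
            enable = self.router_cache[router_id]["enable_ndp_proxy"]
            self._process_router(router_id, enable)

        def _process_router(self, router_id, enable_ndp_proxy):
            print("processing %s, ndp proxy enabled: %s"
                  % (router_id, enable_ndp_proxy))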

Closes-Bug: #1967839
Related-Bug: #1877301
Change-Id: Iab163e69f7e3641e2e1a451374231b6ccfa74c3e
2022-04-08 16:36:00 +00:00
Rodolfo Alonso Hernandez c20f2e5136 [HA] Do not add initial state change delay in HA router
The initial state ("primary", "backup") should be set immediately.
in [1], a transition delay to "primary" was introduced. This delay
is unnecesary when the first state happens.

Closes-Bug: #1945512

[1] https://review.opendev.org/q/I70037da9cdd0f8448e0af8dd96b4e3f5de5728ad

Change-Id: Ibe9178c4126977f1321e414676d67f28e5ec9b57
2021-10-04 14:28:22 +00:00
Slawek Kaplonski 82fd968011 [L3HA] Add extra logs to the process of ha state changes
Some extra debug logs may be useful to understand exactly what happens
during HA state transitions and, e.g., to understand failures like the
one described in the related bug.

Related-bug: #1939507
Change-Id: Id708b2c7a602df8d4ba1b32e58d4b152b5c58ba6
2021-08-12 16:48:57 +02:00
Slawek Kaplonski ce8361a667 [L3HA] Bind metadata haproxy to IPv6 address if IPv6 is enabled
Patch [1] added the possibility for haproxy, spawned as the metadata
proxy in the router's namespace, to be bound to an IPv6 address.
We missed adding the same for HA routers, so when a router was
switched to be active on a node, the L3 agent started haproxy for that
router but it was always bound to an IPv4 address only.
This patch fixes that by checking whether IPv6 is enabled on the host
and, if so, adding the same configuration to haproxy as in non-HA mode.

[1] https://review.opendev.org/c/openstack/neutron/+/715483
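
As an illustrative sketch only (the helper names below are made up and the
real agent uses its own IPv6 utilities), the host check and the resulting
bind addresses could look roughly like this:

    import os

    def ipv6_enabled_on_host():
        # A common heuristic: the kernel exposes this file only when
        # IPv6 support is available.
        return os.path.exists('/proc/net/if_inet6')

    def metadata_bind_addresses():
        # The IPv4 metadata address is always bound; the IPv6 link-local
        # metadata address is added only when the host supports IPv6.
        addresses = ['169.254.169.254']
        if ipv6_enabled_on_host():
            addresses.append('fe80::a9fe:a9fe')
        return addresses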

Closes-Bug: #1915495
Change-Id: Ie97cfe9fe0020929d9a1736d55ad92a5bd643072
2021-02-18 08:55:08 +01:00
Brian Haley 055036ba2b Improve terminology in the Neutron tree
There is no real reason we should be using some of the
terms we do, they're outdated, and we're behind other
open-source projects in this respect. Let's switch to
using more inclusive terms in all possible places.

Change-Id: I99913107e803384b34cbd5ca588451b1cf64d594
2020-08-19 16:47:53 -04:00
LIU Yulong c52029c39a Do not link up HA router gateway in backup node
The L3 router will set its devices link up by default.
For HA routers, the gateway device will be plugged
on all scheduled hosts. When the gateway device is
up on the backup node, it will send out IPv6 related
packets (MLDv2) according to some kernel config.
This will cause the physical fabric to think that the
gateway MAC is now working on the backup node, and
finally the master node L3 traffic will be broken.

This patch sets the backup gateway device link down
by default. When VRRP sets the master state on
one host, the L3 agent state change procedure will
bring the gateway device link up.
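
A minimal illustration of the link state handling described above, assuming
plain ip(8) commands; the real agent uses its own ip_lib helpers and device
naming:

    import subprocess

    def set_gateway_link(namespace, device, up):
        # Run "ip link set <dev> up|down" inside the router namespace.
        state = 'up' if up else 'down'
        subprocess.check_call(
            ['ip', 'netns', 'exec', namespace,
             'ip', 'link', 'set', device, state])

    # Backup node: keep the gateway device down so it stays silent.
    # set_gateway_link('qrouter-<uuid>', 'qg-xxxx', up=False)
    # On transition to master, the state change handler brings it up.
    # set_gateway_link('qrouter-<uuid>', 'qg-xxxx', up=True)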

Closes-Bug: #1859832
Change-Id: I8dca2c1a2f8cb467cfb44420f0eea54ca0932b05
2020-03-25 16:09:42 +08:00
Rodolfo Alonso Hernandez 3f022a193f Delay HA router transition from "backup" to "master"
As described in the bug, when an HA router transitions from "master" to
"backup", the "keepalived" processes will set the virtual IP in all other
HA routers. Each HA router will then advertise it and "keepalived" will
decide, according to a trivial algorithm (higher interface IP), which
one should be "master". At this point, the other "keepalived" processes
running on the other servers will remove the HA router virtual IP
assigned an instant before.

To avoid transitioning some routers from "backup" to "master" and then
back to "backup" in a very short period, this patch delays the "backup"
to "master" transition, waiting for a possible new "backup" state. If,
during the waiting period (set to the HA VRRP advert time, 2 seconds by
default) before setting the HA state to "master", the L3 agent receives
a new "backup" HA state, the L3 agent does nothing.

Closes-Bug: #1837635

Change-Id: I70037da9cdd0f8448e0af8dd96b4e3f5de5728ad
2019-08-27 16:47:00 +00:00
Rodolfo Alonso Hernandez 8b7d2c8a93 Refactor the L3 agent batch notifier
This patch is the first one of a series of patches improving how the L3
agents update the router HA state to the Neutron server.

This patch partially reverts the previous patch [1]. When the batch
notifier sends events, it calls the callback method passed during
initialization, in this case AgentMixin.notify_server. The batch
notifier spawns a new thread in charge of sending the notifications and
then waits the specified "batch_interval" time. If the callback method is
not synchronous with the notify thread execution (which is what [1]
implemented), the thread can finish while the RPC client is still sending
the HA router states. If another HA state update is received, then both
updates can be executed at the same time. It is then possible that a new
router state is overwritten with an old one that has not been sent or
processed yet.

The batch notifier is refactored to improve what was initially
implemented in [2] and then updated in [3]. Currently, each new event
thread can update the "pending_events" list. Then, a new thread is
spawned to process this event list. This thread decouples the current
execution from the calling thread, making the event processing a
non-blocking process.

But with the current implementation, each new event will spawn a new
thread, synchronized with the previous and following ones (using a
synchronized decorator). That means that, during the batch interval
time, the system can have as many waiting threads as events received.
Those threads will end sequentially as the previous threads finish
their batch interval sleep time.

Instead of this, this patch receives and enqueues each new event and
allows only one thread to be alive while processing the event list. If,
at the end of the processing loop, new events are stored, the thread
will process them.

[1] I3f555a0c78fbc02d8214f12b62c37d140bc71da1
[2] I2f8cf261f48bdb632ac0bd643a337290b5297fce
[3] I82f403441564955345f47877151e0c457712dd2f
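
A simplified, standalone sketch of the single-worker pattern described above
(the class name and attributes are illustrative; the real batch notifier
differs in details):

    import threading
    import time

    class BatchNotifierSketch(object):
        def __init__(self, interval, callback):
            self.interval = interval
            self.callback = callback
            self.pending = []
            self.lock = threading.Lock()
            self.running = False

        def queue_event(self, event):
            with self.lock:
                self.pending.append(event)
                if not self.running:
                    # Only one worker thread is alive at any time.
                    self.running = True
                    threading.Thread(target=self._run).start()

        def _run(self):
            while True:
                time.sleep(self.interval)
                with self.lock:
                    batch, self.pending = self.pending, []
                if batch:
                    self.callback(batch)
                with self.lock:
                    # Keep looping if new events arrived while notifying;
                    # otherwise let the worker die.
                    if not self.pending:
                        self.running = False
                        return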

Partial-Bug: #1837635

Change-Id: I20cfa1cf5281198079f5e0dbf195755abc919581
2019-08-01 17:11:04 +00:00
LIU Yulong 426a5b2833 Adjust some HA router log
If the router is concurrently deleted, the HA
state change log is not necessary and can
be confusing.
Also log the PID of the router's
keepalived-state-change child process.

Change-Id: Id57dd787c254994af967db17647a3a28925714da
Related-Bug: #1798475
2019-07-03 04:50:45 +00:00
LIU Yulong 0f471a47c0 Async notify neutron-server for HA states
The RPC notifier method can sometimes be time-consuming,
which causes other parallel processing resources to
fail to send notifications in time. This patch makes
the notification asynchronous.

Closes-Bug: #1824911
Change-Id: I3f555a0c78fbc02d8214f12b62c37d140bc71da1
2019-05-10 15:37:27 +00:00
Boden R 9bbe9911c4 remove neutron.common.constants
All of the externally consumed variables from neutron.common.constants
now live in neutron-lib. This patch removes neutron.common.constants
and switches all uses over to lib.

NeutronLibImpact

Depends-On: https://review.openstack.org/#/c/647836/
Change-Id: I3c2f28ecd18996a1cee1ae3af399166defe9da87
2019-04-04 14:10:26 -06:00
Slawek Kaplonski 66eb1e29f3 Enable ipv6_forwarding in HA router's namespace
When an HA router is created in "standby" mode, ipv6 forwarding is
disabled by default in its namespace.
But when the router transitions to "master" on a node, ipv6
forwarding should be enabled. This was fine for routers with a
configured gateway, but we somehow missed the case when the router
doesn't have a gateway configured.
Because of that missing ipv6 forwarding setting in such a case, IPv6
west-east (W-E) traffic between 2 subnets was not working in the L3 HA case.

This patch fixes it by always configuring ipv6_forwarding on the
"all" interface in the router's namespace, even if the router doesn't
have a gateway configured.
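
For illustration, the equivalent sysctl call, assuming plain ip/sysctl
commands rather than the agent's own wrappers:

    import subprocess

    def enable_ipv6_forwarding(namespace):
        # Always enable forwarding on the "all" interface in the router
        # namespace, whether or not a gateway is configured.
        subprocess.check_call(
            ['ip', 'netns', 'exec', namespace,
             'sysctl', '-w', 'net.ipv6.conf.all.forwarding=1'])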

Change-Id: I8b1b2b426f7a26a4b2407a83f9bf29dd6e9ba7b0
Closes-Bug: #1818224
2019-03-15 14:30:23 +00:00
Slawek Kaplonski 6ae228cc2e Spawn metadata proxy on dvr ha standby routers
When the L3 agent is running in dvr_snat mode on a compute node,
as is the case e.g. in some of the gate jobs, it may happen that the
same router is scheduled in standby mode on a compute node which also
hosts an instance connected to that router.
In such a case the metadata proxy needs to be spawned in the router
namespace even if the router is in standby mode.

Change-Id: Id646ab2c184c7a1d5ac38286a0162dd37d72df6e
Closes-Bug: #1817956
Closes-Bug: #1606741
2019-03-03 22:37:23 +00:00
Brian Haley 66e4a89ba1 Read ha_state file only once
check_ha_state_for_router() can potentially read the
ha_state file three times, since ha_state is defined
as a property in the HaRouter() class.  Read ha_state
into a local variable and use it instead.

Change-Id: Icabe0baf961abe4ddd0699716f26dc96696eb8b1
2018-05-01 20:59:52 +00:00
Brian Haley 922cd0a938 Change ha_state property to always return a value
Right now, ha_state could return any value that is in
the state file, or even '' if the file is empty.  Instead,
return 'unknown' if it's empty.

We also need to update the translation map in the HA code
to deal with this new value to avoid a KeyError.
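
A minimal sketch of the idea; the state values and translation map below are
illustrative, not the exact Neutron mapping:

    TRANSLATION_MAP = {'master': 'active',
                       'backup': 'standby',
                       'fault': 'standby',
                       'unknown': 'standby'}

    def read_ha_state(state_file_path):
        # Never return '' so lookups in TRANSLATION_MAP cannot raise
        # KeyError for an empty or missing state file.
        try:
            with open(state_file_path) as f:
                return f.read().strip() or 'unknown'
        except IOError:
            return 'unknown'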

Related-bug: #1755243

Change-Id: I94a39e574cf4ff5facb76df352c14cbaba793e98
2018-04-17 14:23:23 +00:00
Cao Xuan Hoang d7e93c52bf Add a new method ha_state_change to L3 agent extension
This is needed by VPNaaS agent extension and other advanced
services in case they support L3 HA router.

Change-Id: Ice1b1c2ca97f47312e37379106ed5a6580f100dc
Needed-By: I0b86c432e4b2210e5f2a73a7e3ba16d10467f0f2
Related-Bug: #1692128
2017-10-18 09:41:38 +07:00
Inessa Vasilevskaya 7322bd6efb Make code follow log translation guideline
Since Pike, log messages should not be translated.
This patch removes calls to i18n _LC, _LI, _LE, _LW from
logging logic throughout the code. Translators definition
from neutron._i18n is removed as well.
This patch also removes log translation verification from
ignore directive in tox.ini.

Change-Id: If9aa76fcf121c0e61a7c08088006c5873faee56e
2017-08-14 02:01:48 +00:00
Jenkins fc904313b7 Merge "Make the HA router state change notification faster" 2017-07-07 21:00:08 +00:00
Hunt Xu a15c849563 ProcessManager: honor run_as_root when stopping process
Without this commit, the run_as_root parameter is always True when
stopping a process, which leads to the usage of unnecessary sudo such as
in some functional tests, like the keepalived ones.

This commit fixes the aforementioned problem by taking run_as_root into
account when stopping a process. However, run_as_root will still always
be True if the process is spawned in a netns.

Closes-Bug: #1491581

Change-Id: Ib40e1e3357b9a38e760f4e552bf615cdfd54ee5a
Signed-off-by: Hunt Xu <mhuntxu@gmail.com>
2017-04-22 15:23:59 +08:00
Daniel Alvarez 676a3ebe2f Disable RA and IPv6 forwarding on backup HA routers
Neutron does not disable ipv6 forwarding for HA routers and it's
enabled by default in all router namespaces. For ipv6, this means
that it will automatically join the following groups:

* link-local all-routers multicast group (ff02::2)
* interface-local all-routers multicast group (ff01::2)
* site-local all-routers multicast group (ff05::2)

As a side effect it will answer multicast listener queries, thus
causing the external switch to learn its MAC address and disrupting
traffic to the master instance.

This patch will enable ipv6 forwarding on the gateway interface only
for master instances and disable it otherwise to fix the issue.

Also, the accept_ra procfs entry was enabled under certain
circumstances but it wasn't disabled otherwise. This patch will
disable RA on the gateway interface for non-master instances.
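
An illustrative sketch of the per-role sysctl settings, assuming plain
ip/sysctl commands; the real change goes through the agent's own helpers and
conditions:

    import subprocess

    def configure_gateway_ipv6(namespace, gw_device, is_master):
        # Forwarding and RA acceptance on the gateway interface follow
        # the instance role (accept_ra=2 accepts RAs even when
        # forwarding is enabled).
        fwd = 1 if is_master else 0
        ra = 2 if is_master else 0
        for key, value in (
                ('net.ipv6.conf.%s.forwarding' % gw_device, fwd),
                ('net.ipv6.conf.%s.accept_ra' % gw_device, ra)):
            subprocess.check_call(
                ['ip', 'netns', 'exec', namespace,
                 'sysctl', '-w', '%s=%s' % (key, value)])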

Closes-Bug: #1667756

Change-Id: I9bc890b43f750cad68fc67f4c79f1426c3506863
2017-03-17 15:06:08 +00:00
Robert Li bb3c0e8285 Add PD support in HA router
The following enhancements are added:
  -- PD keeps track of status of neutron routers: active or
     standalone (master), or standby (not master),
  -- PD DHCP clients are only spawned in the active router. In the
     standby router, PD keeps track of the assigned prefixes, but
     doesn't spawn DHCP clients.
  -- When switchover occurs, on the router becoming standby, PD
     clients are "killed" so that they don't send prefix withdrawals
     to the DHCP server. On the router becoming active, PD spawns DHCP
     clients with the assigned prefixes configured as hints in the
     DHCP client's configuration

Closes-Bug: #1651465
Change-Id: I17df98128c7a88e72e31251687f30f569df6b860
2017-03-15 04:31:09 +00:00
AKamyshnikova 1927da1bc7 Add check for ha state
If all agents are shown as standby, it is possible that a state change
was lost due to problems with RabbitMQ. This change adds a check
for the HA state in fetch_and_sync_all_routers. If the state is
different, notify the server that the state should be changed.

Also change _get_bindings_and_update_router_state_for_dead_agents
to set standby for a dead agent only in case we have more than one
active agent.

Change-Id: If5596eb24041ea9fae1d5d2563dcaf655c5face7
Closes-bug:#1648242
2017-01-13 06:39:52 +00:00
Gary Kotton 0e8b32b03b L3-HA: remove unused deprecated code
Commit 2823c2e569 added the
deprecation warning, but no one uses the code, so the unused code is
deleted.

Change-Id: I22d96182a7c88f725d19559c25f1820a8fb176f2
2016-11-07 01:34:20 -08:00
Gary Kotton dbbbe595f4 Use ensure_tree from oslo_utils.fileutils
Make use of the common oslo method to implement ensure_dir.
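
A small usage example of the oslo helper (the path below is only a
placeholder):

    from oslo_utils import fileutils

    # Create the directory tree if it does not already exist, roughly
    # equivalent to the old local ensure_dir helper.
    fileutils.ensure_tree('/var/lib/neutron/ha_confs', 0o755)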

TrivialFix

Change-Id: Ia9e4c581664235476f290a4b651c5a24017ce357
2016-11-05 00:00:31 -07:00
LIU Yulong e795a3fcf8 Make the HA router state change notification faster
An HA router state change takes too much time to be notified to the
neutron server: almost 16s, with the default ha_vrrp_advert_int of 2s,
for a single HA router state change.

Assume that in this 16s window an HA router goes through 8 state
changes. After the 16s, the first change is dequeued and notified to
the neutron server, then the 2nd, 3rd, and so on. Things now become
interesting: after these 16 seconds, if you run
`neutron l3-agent-list-hosting-router ha_router_id`, you may see the
router state on one specific agent alternating between active and
standby. It does not reflect the real state, because of the delayed
notification.

This patch sets the BatchNotifier interval to ha_vrrp_advert_int (default
2s) to make the HA router state change notification faster.

NOTE: the BatchNotifier event queue is needed because the HA router state
changes need to be sent in the proper order so that the neutron server can
set the HA state properly.

Closes-Bug: #1612069
Change-Id: Ife687038d31bd1e1ee264ff8b6ae1264fdd05489
2016-10-13 10:00:25 +08:00
Aradhana Singh 2823c2e569 Refactoring config options for l3 ha agent opts
Refactoring l3 ha agent options to be in neutron/conf/agent/l3.
This allows centralization of all configuration options and
provides an easy way to import them.

Partial-Bug: #1563069
Change-Id: I2d6bd6beb0d1658baf88c49b954d2db3136e0c8d
2016-09-30 15:00:42 -05:00
venkata anil 70ea188f5d New option for num_threads for state change server
Currently the max number of client connections (i.e. greenlets spawned
at a time) opened at any time by the WSGI server is set to 100 with
wsgi_default_pool_size [1].

This configuration may be fine for the neutron API server, but with
wsgi_default_pool_size (=100) requests the state change server
creates heavy CPU load on the agent. So this server (which runs on
agents) needs a smaller value, e.g. it can be configured to half the
number of CPUs on the agent.

We use the "ha_keepalived_state_change_server_threads" config option
to configure the number of threads in the state change server instead
of wsgi_default_pool_size.

[1] https://review.openstack.org/#/c/278007/

DocImpact: Add a new config option,
ha_keepalived_state_change_server_threads, to configure the number
of threads in the state change server.
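
An illustrative option definition; the real option lives in
neutron/conf/agent/l3 and its default and help text may differ:

    import multiprocessing

    from oslo_config import cfg

    OPTS = [
        cfg.IntOpt('ha_keepalived_state_change_server_threads',
                   # Example default: half the CPUs on the agent host.
                   default=max(1, multiprocessing.cpu_count() // 2),
                   help='Number of concurrent threads for the '
                        'keepalived state change server.'),
    ]

    cfg.CONF.register_opts(OPTS)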

Closes-Bug: #1581580
Change-Id: I822ea3844792a7731fd24419b7e90e5aef141993
2016-09-23 17:07:12 +00:00
Hong Hui Xiao 347778a9f9 Enable RA on gateway when adding gateway to HA router
Currently 'accept_ra' is only configured when the HA router changes
state to 'master'. If the router gateway is added after the router state
change, the gateway port of the 'master' HA router will not be configured.

This patch configures 'accept_ra' on the gateway when it is added to a
'master' HA router.

Change-Id: Ice1f3e6e48597ea8c366e243c2ca1771ea9b7770
Closes-bug: #1585246
2016-08-17 05:50:53 -06:00
Akihiro Motoki 2d8632e412 Use _ from neutron._i18n
Partial-Bug: #1520094
Change-Id: I874a4aa1d71d1f7034a1ff0b7450b419ef5c6864
2015-12-06 19:39:04 +09:00
Doug Wiegley dd726ed494 Move i18n to _i18n, as per oslo_i18n guidelines
- This does NOT break other projects that rely on neutron.i18n,
  as this change includes a debtcollector shim to maintain those
  older entry points, until they can migrate.
- Also updates _i18n.py to the latest pattern defined by oslo_i18n
- Guidance and template are from the reference:
  http://docs.openstack.org/developer/oslo.i18n/usage.html

Partially-Closes-Bug: #1519493
Change-Id: I1aa3a5fd837d9156da4643a367013c869ed8bf9d
2015-12-01 19:29:10 -07:00
Hong Hui Xiao ce3a31faff Don't update metadata_proxy if metadata is not enabled
When enable_metadata_proxy is false, the agent instance will
not have a metadata_driver, and the agent should avoid using it.

Change-Id: Ia18dc5dea23de49b97c8f225532531eb9232fb51
Closes-Bug: #1510399
2015-10-28 05:57:43 -04:00
Hong Hui Xiao 9d65841200 The exception type is wrong and makes the except block not work
According to the context, a KeyError should be caught here;
an AttributeError will not happen. More details can be found
in the bug report.
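
A minimal illustration of why the exception type matters here (the dict and
key are placeholders):

    # The value is looked up in a dict, so a missing key raises
    # KeyError, not AttributeError.
    router_info = {}

    try:
        ri = router_info['some-router-id']
    except KeyError:
        # Catching AttributeError instead would never trigger and the
        # fallback logic would be skipped.
        ri = None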

Change-Id: Id6351172703ac492e86475f75bf1be03f4e4e8a3
Closes-bug: #1506934
2015-10-16 15:12:20 -04:00
Michael Smith f63366e615 L3 Agent support for routers with HA and DVR
The main difference for DVR HA routers is where
the VRRP/keepalived logic is run and which ports
fall in the HA domain for DVR.  Instead of running
in the qrouter namespace, keepalived will run inside
the snat-namespace.  Therefore only snat ports will
fall under the control of the HA domain.

Partial-Bug: #1365473

Change-Id: If2962580397d39f72fd1fbbc1188a6958f00ff0c
Co-Authored-By: Michael Smith <michael.smith6@hp.com>
Co-Authored-By: Hardik Italia <hardik.italia@hp.com>
Co-Authored-By: Adolfo Duarte <adolfo.duarte@hp.com>
Co-Authored-By: John Schwarz <jschwarz@redhat.com>
2015-10-06 18:27:25 +03:00
sridhargaddam c89a4fdd88 Configure gw_iface for RAs only in Master HA Router
For an HA Router which does not have any IPv6 subnets in the external network
and when ipv6_gateway is not set, Neutron configures the gateway interface of
the router to receive Router Advts for default route. In an HA router, only
the Master instance has the IP addresses while the Backup instance does not
have any addresses (including LLA). In Kernel version 3.10, when the last
IPv6 address is removed from the interface, IPv6 proc entries corresponding
to the iface are also deleted. This is however reverted in the later versions
of kernel code.

This patch addresses this issue by configuring the proc entry only for the
Master HA Router instance instead of doing it unconditionally.

Closes-Bug: #1494336
Change-Id: Ibf8e0ff64cda00314f8fa649ef5019c95c2d6004
2015-09-11 14:01:23 +00:00
Ihar Hrachyshka f53a43fd5e ensure_dir: move under neutron.common.utils
There is nothing Linux or agent specific in the function. I need to use
it outside agent code in one of depending patches, hence moving it into
better location while leaving the previous symbol in place, with
deprecation warning, for backwards compatibility.

Change-Id: I252356a72f3c742e57c1b6127275030f0994a221
2015-07-21 16:33:09 +02:00
Oleg Bondarev 6deed4363b Don't pass namespace name in disable_isolated_metadata_proxy
It's not always possible/convenient to get the namespace name
when we need to disable some process (like the metadata process for a
stale router, see the related bug). Since the namespace name is not
required for the process manager to disable a process, we can remove
this parameter from disable_isolated_metadata_proxy().

Change-Id: I0e0da01d9640aa9920f41989804fc6f320c1c1eb
Related-Bug: #1455042
2015-05-14 17:43:28 +03:00
sridhargaddam 2f9b0ce940 Spawn RADVD only in the master HA router
Currently radvd is spawned in all the HA routers irrespective of the
state of the router. This approach has the following issues.

1. While processing the internal router ports (i.e., qr-xxx), ha_router
   removes the LLA of the interface and adds it as a VIP to Keepalived conf.
   Radvd daemon is spawned after this operation in the router namespace
   (if the port is associated with any IPv6 subnets). Radvd notices that
   qr-xxx interface does not have the LLA, so does not transmit any Router
   Advts. In this state, VMs fail to acquire IPv6 addresses because of the
   missing RAs. Radvd does not recover even after keepalived configures the
   LLA of the interface. The only solution is to restart/reload radvd daemon.
   Currently the keepalived-state-change monitor does not do any radvd related
   operations when a state transition happens, so we end up in this state
   forever.
2. For all the routers in Backup state, qr-xxx interface does not have LLA
   as it is managed by keepalived and configured only on the Master HA router.
   In such agents syslog is flooded with the messages [1] and this can cause
   loss of other useful info.
   [1] - resetting ipv6-allrouters membership on qr-2e373555-97

This patch implements the following.
1. If the router is already in the Master state, we configure the LLA as a VIP
   in keepalived conf but do not delete the LLA of the internal interface.
2. We spawn radvd only if the router is in the Master State.
3. Keepalived-state-change monitor takes care of enabling/disabling radvd upon
   state transitions.

Closes-Bug: #1440699
Change-Id: I351c71d058170265bbb8b56e1f7a3430bd8828d5
2015-04-23 17:15:15 +05:30
sridhargaddam a1b8a770c1 Some cleanup in L3 HA code
This patch addresses the following.
1. removes the unused variables.
2. process_monitor (argument to KeepalivedManager) is changed to
   a non-default parameter as it's used in the spawn and disable methods.

Change-Id: I8b130b21965ed3387e994818be947eb95d73a423
2015-04-01 12:04:07 +00:00
Assaf Muller 79fcf57b37 Move process_ha_router_added/removed from HA agent to router
* Move process_ha_router_added/removed from ha.py to
  ha_router.py, rename them initialize and terminate
* Remove _process_ha_router (Spawns/disables keepalived) from
  process_router (Called when adding/updating and deleting
  a router), move its content to process_router for add/update
  and terminate for delete
* Rename ha_router.spawn_keepalived to enable_keepalived
  (Consistent with disable_keepalived and process_manager
  semantics)

Partially-Implements: bp/restructure-l3-agent
Change-Id: I1f21acdae2ae1faa2c78affaa3f1ce9056487104
2015-03-24 21:11:42 -04:00
Assaf Muller 79f64ab041 Send notification to controller about HA router state change
The L3 agent gets keepalived state change notifications via
a unix domain socket. These events are now batched and
sent out as a single RPC to the server. In case the same
router got updated multiple times during the batch period,
only the latest state is sent.
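
A tiny sketch of the coalescing step, keeping only the latest state per
router (the function and data shapes are illustrative):

    def coalesce_states(events):
        # Keep only the most recent state per router so a single RPC
        # carries one entry per router.
        latest = {}
        for router_id, state in events:
            latest[router_id] = state
        return latest

    # coalesce_states([('r1', 'master'), ('r1', 'backup')])
    # -> {'r1': 'backup'}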

Partially-Implements: blueprint report-ha-router-master
Change-Id: I36834ad3d9e8a49a702f01acc29c7c38f2d48833
2015-03-20 18:03:59 -04:00
Assaf Muller 9bae3b1832 Replace keepalived notifier bash script with Python ip monitor
Previously L3 HA generated a bash script and copied it to a per-router
configuration directory that was visible to that router's keepalived
instance. This patch changes the in-line generated Bash script to a
Python script that can be maintained in the repository.
The bash script was used as a keepalived notifier script, that was invoked
by keepalived whenever a state transition occured. These notifier scripts
may be invoked by keepalived out of order in case it transitions quickly
twice. For example, if the master failed and two slaves fight for the new
master role. One will transition to master, and the other will often
transition to master and then immidiately back to standby. In this case,
the transition scripts were often fired out of order, resulting in the
wrong state being reported.

The proposed approach is to get rid of the keepalived notifier scripts
entirely. Instead, monitor IP changes on the HA device. If the omnipresent
IP address was configured on the HA device, it means that we're looking
at a master instance. If it was deleted, the router has transitioned to
standby or fault.

In order to keep the L3 agent CPU usage down, it will spawn a process
per HA router. That process will start the ip address monitor.
Whenever it gets an IP address change event, it will notify the L3 agent
via a unix domain socket.
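
A rough, standalone sketch of the monitoring idea, assuming plain ip-monitor
output; the parsing below is deliberately simplified and not the actual
monitor implementation:

    import subprocess

    def monitor_ha_device(namespace, ha_device, vip_cidr, notify):
        # Watch address events on the HA device and infer the role from
        # the presence of the VIP.
        cmd = ['ip', 'netns', 'exec', namespace, 'ip', 'monitor', 'address']
        proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                                universal_newlines=True)
        for line in proc.stdout:
            if ha_device not in line or vip_cidr not in line:
                continue
            if line.startswith('Deleted'):
                notify('backup')   # VIP removed: no longer the master
            else:
                notify('master')   # VIP configured: this is the master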

Partially-Implements: blueprint report-ha-router-master
Change-Id: I2022bced330d5f108fbedd40548a901225d7ea1c
Closes-Bug: #1402010
Closes-Bug: #1367705
2015-03-18 18:59:33 -04:00
Assaf Muller 89eef89047 Don't delete HA router primary VIP on agent restarts
An HA router's primary VIP was being deleted from the router
namespace when the L3 agent is restarted. Make sure that
doesn't happen and change the functional test to make sure
the bug stays squashed.

Change-Id: I0e5b416152bacf496de54bedee0fca8d3950be2b
Closes-Bug: #1432806
Closes-Bug: #1432785
2015-03-17 18:45:51 -04:00
Carl Baldwin db80198a1b Move internal port processing to router classes
Change-Id: I38b18f6d24c788c96b2d13ccca5d2e9a7a4fcdeb
Partially-Implements: bp/restructure-l3-agent
2015-03-13 14:39:53 +00:00
Jenkins 77fc12f3a7 Merge "Use common agent.linux.utils.ensure_dir method" 2015-03-13 07:41:38 +00:00
Ihar Hrachyshka 22328baf1f Migrate to oslo.log
It's mostly a matter of changing imports to a new location.

Non-obvious changes needed:
* pass overwrite= argument to oslo_context since oslo.log reads context
  from its thread local store and not local.store from incubator
* don't store context at local.store now that there is no code that
  would consume it
* LOG.deprecated() -> versionutils.report_deprecated_feature()
* dropped LOG.audit check from hacking rule since now the method does
  not exist
* WritableLogger is now located in oslo_log.loggers

Dropped log module from the tree. Also dropped local module that is now
of no use (and obsolete, as per oslo team).

Added versionutils back to openstack-common.conf since now we use the
module directly from neutron code and not just as a dependency of some
other oslo-incubator module.

Note: tempest tests are expected to be broken now, so instead of fixing
all the oslo.log related issues for the subtree in this patch, I only
added TODOs with directions for later fix.

Closes-Bug: #1425013
Change-Id: I310e059a815377579de6bb2aa204de168e72571e
2015-03-12 11:22:56 +01:00
zengfagao eba4c2941e Use common agent.linux.utils.ensure_dir method
We repeated os.makedirs(dir, 0o755) in several places. We should use
the common neutron.agent.linux.utils.ensure_dir. Unit tests are also added.

Change-Id: Iaeae5ff7dc6676420c000d6501f69a5997ad4b6c
Closes-bug: 1419042
2015-03-11 20:46:31 -07:00
Mike Kolesnik 133a399d4f Add process monitor to keepalived
Adding process monitor to keepalived code, so that keepalived processes
launched by the L3 agent will get monitored the same as other processes
launched by the L3 agent.

Old monitoring code was removed since it's not needed anymore.

Implements: blueprint agent-child-processes-status

Change-Id: I94a889ee07286ab3c6cdab9ab15e5aee6fbd133a
2015-03-09 15:31:34 +02:00
Eric Brown 58d737ed52 Use oslo_config choices support
The oslo_config library added support for a choices keyword argument in
version 1.2.0a3.  This commit leverages the use of choices for StrOpts in
Neutron's configuration.

References:
http://docs.openstack.org/developer/oslo.config/#a3
https://bugs.launchpad.net/oslo-incubator/+bug/1123043
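
For illustration, a StrOpt using the choices keyword (the option shown is
just an example definition, not necessarily matching the real one):

    from oslo_config import cfg

    # Invalid values are rejected at option parsing time instead of
    # deep inside the code.
    OPTS = [
        cfg.StrOpt('ha_vrrp_auth_type',
                   default='PASS',
                   choices=['AH', 'PASS'],
                   help='VRRP authentication type.'),
    ]

    cfg.CONF.register_opts(OPTS)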

Change-Id: I2f935731ed7e1dea6d297dd72960d01cb8859656
2015-03-02 14:48:25 -08:00
Ihar Hrachyshka 7a2a85623d oslo: migrate to namespace-less import paths
Oslo project decided to move away from using oslo.* namespace for all their
libraries [1], so we should migrate to new import path.

This patch applies new paths for:
- oslo.config
- oslo.db
- oslo.i18n
- oslo.messaging
- oslo.middleware
- oslo.rootwrap
- oslo.serialization
- oslo.utils

Added hacking check to enforce new import paths for all oslo libraries.

Updated setup.cfg entry points.

We'll cleanup old imports from oslo-incubator modules on demand or
if/when oslo officially deprecates old namespace in one of the next
cycles.

[1]: https://blueprints.launchpad.net/oslo-incubator/+spec/drop-namespace-packages

Depends-On: https://review.openstack.org/#/c/147248/
Depends-On: https://review.openstack.org/#/c/152292/
Depends-On: https://review.openstack.org/#/c/147240/

Closes-Bug: #1409733
Change-Id: If0dce29a0980206ace9866112be529436194d47e
2015-02-05 15:09:32 +01:00