We keep passing around driver instances as a 'driver' parameter
and tracking them locally in the instance and state manager as self.driver.
What is actually being passed is an encapsulated resource, and we should
reference it as such rather than opaquely. This renames it accordingly.
It also removes some redundancy where we were passing resource_id along
with a resource object that already contains the id.
Change-Id: I65490f01608fda1da3467455ee58ecb5fa6c7873
On startup, if the worker receives messages for pre-populated resources
prior to processing the initial cluster rebalance event, the messages
will be dropped. This fixes the race by tracking when the hash ring
has been initialized. Any events it receives prior to finishing init
will be batched up and processed as part of the initial bootstrapping
procedure.
Change-Id: I3caf95f57380076ab48e4270e1cd575906fba386
Closes-bug: #1554248
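The race fix above amounts to a common pattern: queue anything that arrives before initialization finishes, then drain the queue once the hash ring is built. A minimal sketch of that pattern, with hypothetical names (`_ring_initialized`, `_pre_init_events` are illustrative, not astara's actual attributes):

```python
import threading


class Worker:
    """Batches events that arrive before the hash ring is initialized."""

    def __init__(self):
        self._lock = threading.Lock()
        self._ring_initialized = False
        self._pre_init_events = []
        self.processed = []

    def handle_message(self, event):
        with self._lock:
            if not self._ring_initialized:
                # Too early: the hash ring isn't built yet, so we can't
                # tell whether this worker owns the resource. Batch it
                # instead of dropping it.
                self._pre_init_events.append(event)
                return
        self._process(event)

    def handle_rebalance(self):
        # Build the ring first, then drain everything that arrived early
        # as part of the initial bootstrapping procedure.
        with self._lock:
            self._ring_initialized = True
            pending, self._pre_init_events = self._pre_init_events, []
        for event in pending:
            self._process(event)

    def _process(self, event):
        self.processed.append(event)
```

Events received pre-init are held rather than dropped, so messages for pre-populated resources survive until the first rebalance event is processed.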
This cleans up the worker's handling of rebalance events a bit
and ensures we don't drop state machines in a way that prevents
them from later being recreated. It also fixes a bug where, upon
failing over resources to a new orchestrator, we created a state
machine per worker instead of dispatching them to a single worker.
To do this, the scheduler is passed into workers as well as the
process name, allowing them to more intelligently figure out what
they need to manage after a cluster event.
Finally, this ensures a config update is issued to appliances after
they have moved to a new orchestrator after a cluster event.
Change-Id: I76bf702c33ac6ff831270e7185a6aa3fc4c464ca
Partial-bug: #1524068
Closes-bug: #1527396
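The single-worker dispatch fix can be sketched as deterministic ownership: each worker hashes the resource id and only takes ownership if it is the target, so exactly one state machine gets created. This is an illustrative sketch (the hashing scheme, `num_workers`, and function names are assumptions, not astara's actual scheduler API):

```python
import hashlib


def target_worker(resource_id, num_workers):
    """Deterministically map a resource id to exactly one worker index."""
    digest = hashlib.md5(resource_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_workers


def should_manage(resource_id, my_index, num_workers):
    # Every worker evaluates this after a cluster event; only the one
    # the resource hashes to creates a state machine for it, instead of
    # each worker creating its own.
    return target_worker(resource_id, num_workers) == my_index
```

Because every worker computes the same hash, no coordination is needed: the workers agree on a single owner without exchanging messages.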
This pushes a couple of flags into the appliance that are specific to the
individual orchestrator instance managing that appliance. Initially, we use
it to tell the appliance where the metadata proxy is listening. Previously,
this was hard-coded to a known address on the network. With multiple
orchestrators in a clustered environment, this allows each to run its own
metadata proxy, with only its managed appliances querying it.
A follow-up patch will ensure this stays up to date when rebalances
occur and orchestrators take over new appliances.
Change-Id: Ib502507b29f17146da81f61f34957cd96a1548f4
Partial-bug: #1524068
When a resource is deleted, it's not currently removed from the tenant
resource cache. This causes a cache hit if the tenant attempts to re-create
the same type of resource, but the resource is then ignored because
it has been deleted. This adds a callback used by the TRM to remove the
resource from the cache when its state machine is deleted.
Change-Id: I5dcbeda7de240a693fc7a4944dd34a37b10d174b
Closes-bug: #1531597
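The shape of this fix is a deletion callback: the cache owner hands the state machine a function to call when the machine is torn down, purging the stale entry. A hypothetical sketch (class and method names are illustrative, not astara's actual TRM API):

```python
class TenantResourceCache:
    """Maps (tenant id, resource type) to a cached resource id."""

    def __init__(self):
        self._cache = {}

    def get(self, tenant_id, resource_type):
        return self._cache.get((tenant_id, resource_type))

    def add(self, tenant_id, resource_type, resource_id):
        self._cache[(tenant_id, resource_type)] = resource_id

    def remove(self, tenant_id, resource_type):
        self._cache.pop((tenant_id, resource_type), None)


class StateMachine:
    """Fires a deletion callback so stale cache entries are purged."""

    def __init__(self, tenant_id, resource_type, on_delete):
        self.tenant_id = tenant_id
        self.resource_type = resource_type
        self._on_delete = on_delete

    def delete(self):
        # Without this callback, a later create for the same tenant and
        # resource type would hit the stale cache entry and be ignored.
        self._on_delete(self.tenant_id, self.resource_type)
```

Passing `cache.remove` as the callback keeps the cache decoupled from the state machine's lifecycle while guaranteeing cleanup on deletion.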
The same exception is handled similarly elsewhere. This avoids a corner
case where the worker throws an exception here and blocks
the state machine from processing future messages.
Change-Id: I14709faac9228797f9ca043e45c550449437e561
Closes-bug: #1536901
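The corner case amounts to delivery failures escaping and halting the state machine's message loop; the fix is to log and continue instead of propagating. A hedged sketch under assumed names (`send_message_safely` and the interfaces here are illustrative, not astara's actual code):

```python
import logging

LOG = logging.getLogger(__name__)


def send_message_safely(state_machine, message):
    """Deliver a message, logging failures instead of propagating them.

    If the exception escaped here, the worker would stop feeding this
    state machine entirely; swallowing and logging it keeps later
    messages flowing, mirroring how the same exception is handled
    elsewhere.
    """
    try:
        state_machine.send_message(message)
        return True
    except Exception:
        LOG.exception("failed to deliver message to state machine")
        return False
```

The return value lets callers distinguish a dropped message from a delivered one without reintroducing the blocking behavior.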
In this step, all imports and usage of akanda.rug are updated to
use astara. Additionally, all internal references to Akanda are renamed
to Astara.
Change-Id: I0cb8596066d949bceaadc4718b210fc373b5f296
Depends-On: I87106ae63747291bb6424839b5155f53136c54f9
Implements: blueprint convert-to-astara