Commit Graph

16 Commits

Author SHA1 Message Date
Hervé Beraud 5fa48d67a2 Remove six and python 2.7 full support
Six is in use to help us keep support for Python 2.7.
Since the Ussuri cycle we decided to remove Python 2.7
support, so we can go ahead and also remove the six usage
from the Python code.

Review process and help
-----------------------
Removing six introduces a lot of changes and a huge number of modified files.
To simplify reviews and avoid mistakes, we decided to split the changes into
several patches.

To review this patch you can use the six documentation [1] for help in
understanding the choices made.

Additional information
----------------------
Changes related to 'six.b(data)' [2]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

six.b [2] encodes the given data as latin-1 under Python 3, so this patch
does the same.

Latin-1 is equivalent to ISO-8859-1 [3].

This encoding is the default encoding [4] of certain descriptive HTTP
headers.

I suggest keeping latin-1 for the moment and moving to a more capable
encoding (UTF-8) in a follow-up patch if needed.
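The replacement applied throughout this series amounts to the following (a minimal sketch; `b` here is a stand-in for the removed `six.b` helper, not code from the patch):

```python
def b(s):
    # Python 3 equivalent of the removed six.b(s): encode the string
    # as latin-1 (ISO-8859-1), matching six's documented behaviour
    return s.encode("latin-1")

# wherever the code had six.b("data"), it can now read b"data" for
# literals, or an explicit .encode("latin-1") for computed strings
header_key = b("x-account-meta-temp-url-key")
```

For pure ASCII strings the result is identical to a UTF-8 encode, which is why latin-1 is a safe placeholder until a follow-up patch revisits the choice.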

HTML4 supports the UTF-8 charset, and UTF-8 is the default charset for
HTML5 [5].

Note that this commit message is autogenerated, so this patch does not
necessarily contain changes related to 'six.b'.

[1] https://six.readthedocs.io/
[2] https://six.readthedocs.io/#six.b
[3] https://docs.python.org/3/library/codecs.html#standard-encodings
[4] https://www.w3schools.com/charsets/ref_html_8859.asp
[5] https://www.w3schools.com/html/html_charset.asp

Patch 13 of a series of 28 patches

Change-Id: I09aa3b7ddd93087c3f92c76c893c609cb9473842
2020-04-23 14:49:12 +02:00
Zane Bitter ec189f4657 Use wait_random_exponential from tenacity 4.4.0
Now that we depend on tenacity >=4.4.0, we can use the library's version of
the wait_random_exponential wait strategy in place of our own.
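For reference, the strategy behaves roughly like the sketch below (parameter names follow tenacity's, but this is an illustration of the idea, not the library's code):

```python
import random

def wait_random_exponential(attempt_number, multiplier=1, maximum=60):
    # pick a random delay in [0, bound], where the bound doubles with
    # each attempt and is capped at the configured maximum
    bound = min(maximum, multiplier * (2 ** attempt_number))
    return random.uniform(0, bound)

delay = wait_random_exponential(attempt_number=3)  # somewhere in [0, 8]
```

Randomizing over the full interval (rather than adding jitter to a fixed exponential delay) spreads retries from contending workers more evenly.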

Change-Id: I13d3222808a98ef7e333f58df931c8f950ac1221
Depends-On: https://review.openstack.org/556309
2018-03-26 10:18:22 -04:00
Zuul 98636290c5 Merge "Support tenacity exponential backoff retry on resource sync" 2018-02-10 01:15:27 +00:00
Zane Bitter 6a176a270c Use a namedtuple for convergence graph nodes
The node key in the convergence graph is a (resource id, update/!cleanup)
tuple. Sometimes it would be convenient to access the members by name, so
convert to a namedtuple.
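A sketch of the conversion (the field names here are illustrative):

```python
from collections import namedtuple

# node key in the convergence graph: (resource id, update/!cleanup)
GraphKey = namedtuple('GraphKey', ['rsrc_id', 'is_update'])

key = GraphKey(rsrc_id=42, is_update=True)
assert key == (42, True)   # still compares equal to a plain tuple
assert key.is_update       # but members are accessible by name
```

Because a namedtuple is a tuple subclass, existing code that indexes or unpacks the key keeps working unchanged.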

Change-Id: Id8c159b0137df091e96f1f8d2312395d4a5664ee
2017-09-26 16:46:17 -04:00
ricolin bc83d86255 Support tenacity exponential backoff retry on resource sync
Change to use tenacity as the retry library for SyncPoints.

Use exponential backoff retry waiting time. The amount of jitter per
potential conflict increases 'exponentially' (*cough* geometrically)
with each retry. The number of expected conflicts (which drops over
time) is updated at each attempt. This allows us to discover the right
rate for attempting commits across all resources that are in contention,
while actually reducing the delay between retries for any particular
resource as the number of outstanding resources drops.
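A rough sketch of the idea (names and scaling constants are illustrative, not the actual implementation):

```python
import random

def retry_delay(attempt, expected_conflicts, base=0.05):
    # the jitter window grows geometrically with each attempt, but is
    # also scaled by the number of resources still in contention, so
    # the expected delay for any one resource shrinks as its siblings
    # complete and expected_conflicts drops
    bound = base * expected_conflicts * (2 ** attempt)
    return random.uniform(0, bound)
```

Re-evaluating `expected_conflicts` at each attempt is what lets the delay fall over time instead of growing monotonically as in plain exponential backoff.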

Change-Id: I7d5a546a695480df309f22688b239572aa0f897a
Co-Authored-By: Zane Bitter <zbitter@redhat.com>
Closes-Bug: #1591469
2017-07-25 03:45:57 +00:00
Zane Bitter 45e4c53f78 Cache attributes with custom handling
Previously, all caching of attribute values was done via the Attributes
object. However, some resource types override Resource.get_attribute() to
do custom handling of the trailing attribute path or dynamic attribute
names, and in these cases the resulting values were not cached (since they
don't go through the Attributes object).

This patch adds a caching step for these resources:

* OS::Senlin::Cluster
* OS::Heat::ResourceChain
* OS::Heat::ResourceGroup
* OS::Heat::AutoscalingGroup
* OS::Heat::SoftwareDeployment
* OS::Heat::SoftwareDeploymentGroup
* TemplateResource
* AWS::CloudFormation::Stack

Change-Id: I07ac22cc4370a79bd8712e2431fa3272115bc0eb
Co-Authored-By: Crag Wolfe <cwolfe@redhat.com>
Partial-Bug: #1660831
2017-06-27 22:08:03 -04:00
gecong1973 7c389dd2a5 Fix some spelling mistakes in heat as follows:
   in heat/contrib/rackspace/rackspace/tests/test_auto_scale.py:270:
       "mock nova and glance client methods to satisfy contraints" -- contraints should be constraints
   in heat/heat_integrationtests/functional/test_resource_group.py:51:
       "triggering validation of nested resource custom contraints" -- contraints should be constraints
   in heat/heat/common/exception.py:258:
       "Keep this for AWS compatiblility." -- compatiblility should be compatibility
   in heat/heat/engine/resources/openstack/ceilometer/alarm.py:349:
       "so we don't create watch tasks unneccessarly" -- unneccessarly should be unnecessarily
   in heat/heat/engine/resources/openstack/neutron/vpnservice.py:462:
       "The Internet Key Exchange policy identifyies the authentication and" -- identifyies should be identifies
   in heat/heat/engine/resources/openstack/nova/server.py:1426:
       "if 'security_groups' present for the server and explict 'port'" -- explict should be explicit
   in heat/heat/engine/service.py:182:
       "releasing the lock to avoid race condtitions" -- condtitions should be conditions
   in heat/heat/engine/sync_point.py:134:
       "don't aggresively spin; induce some sleep" -- aggresively should be aggressively
   in heat/heat/tests/openstack/heat/test_software_deployment.py:889:
       "Test bug 1332355, where details contains a translateable message" -- translateable should be translatable
   in heat/heat/tests/test_environment.py:596:
       "make sure the parent env is uneffected" -- uneffected should be unaffected
   in heat/heat/engine/resources/openstack/nova/server.py:472:
       "'ignorning it or by replacing the entire server.'" -- ignorning should be ignoring
   in heat/contrib/rackspace/rackspace/resources/cloud_server.py:104:
       "'retained for compatability.'" -- compatability should be compatibility
   in heat/heat/engine/stack.py:1258:
       "ID %(trvsl_id)s, not trigerring rollback." -- trigerring should be triggering

Change-Id: Ic4ddb65dbfaf61751a330b853780689209f9f4b5
Closes-Bug: #1595376
2016-06-23 12:39:48 +08:00
Anant Patil f5e7a319cb Convergence: Throttle to sync point updates
Throttle sync point updates by inducing some sleep after each conflict
and before the retry. The sleep time is randomly generated based on the
number of potential conflicts. The randomness in the sleep time is
required to reduce the number of conflicts when updating sync points.

Closes-Bug: 1529567

Change-Id: Icd36d275a0c9fd15a86de34e79312e2a857d4621
2016-05-31 20:19:40 +05:30
ricolin 0c8d9145da Use EntityNotFound instead of SyncPointNotFound
Unify the NotFound exception with EntityNotFound.

Change-Id: I0c69596eb332b768a606c7b11ef768c4a1404d2e
Depends-On: I782c372723f188bab38656e5b7cc401d23808ffb
2016-01-17 06:19:52 +00:00
hgangwx c9abb4744f Wrong usage of "an"
Wrong usage of "an" in the messages:
"Now it's an subclass of module versions"
"Represents an syncpoint for an stack"
"Creates an sync point entry in DB"

Should be:
"Now it's a subclass of module versions"
"Represents a syncpoint for a stack"
"Creates a sync point entry in DB"

4 occurrences in total in the Heat code base.

Change-Id: I19a0c984a2d19719e4687fbcbec3760866ddab11
2015-12-27 16:12:46 +08:00
Peter Razumovsky 2da170c435 Fix [H405] pep rule in heat/engine
Fix [H405] rule violations in heat/engine Python files.

Implements bp docstring-improvements

Change-Id: Iaa1541eb03c4db837ef3a0e4eb22393ba32e270f
2015-09-21 14:51:46 +03:00
Anant Patil 4cf262e473 Convergence: Fix failing integration tests
Input data can contain tuples as keys when the attribute and path
components are resolved. Converting this to JSON (serializing) fails.
To fix this, recursively look for tuples used as keys in the input data
and convert them to strings while serializing, and back again while
deserializing.
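The round-trip can be sketched as follows (function names are hypothetical; the real code lives in Heat's sync point serialization, and this simplified version only recurses through nested dicts):

```python
import ast
import json

def encode_keys(data):
    # recursively turn tuple keys into their repr() strings so the
    # structure survives json.dumps(), which only allows string keys
    if isinstance(data, dict):
        return {repr(k) if isinstance(k, tuple) else k: encode_keys(v)
                for k, v in data.items()}
    return data

def decode_keys(data):
    # reverse the transformation: any key that looks like a tuple
    # literal is safely parsed back into a real tuple
    if isinstance(data, dict):
        return {ast.literal_eval(k) if k.startswith('(') else k:
                decode_keys(v) for k, v in data.items()}
    return data

original = {('attr', 0): 'value', 'plain': {('x', 1): 2}}
restored = decode_keys(json.loads(json.dumps(encode_keys(original))))
assert restored == original
```

Using `ast.literal_eval` rather than `eval` keeps the deserialization safe against arbitrary code in the stored strings.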

Change-Id: I87e496d51004f3374965332921628f5eccb34657
Partial-Bug: #1492116
2015-09-12 08:30:04 +00:00
Angus Salkeld abb69bb554 Convergence: Make SyncPoint.update_input_data actually atomic.
Co-Authored-By: Sirushti Murugesan <sirushti.murugesan@hp.com>
Co-Authored-By: Anant Patil <anant.patil@hp.com>
Change-Id: I3ed7f50d9d48c3c8713c167d2864464c0fefdb70
2015-06-26 15:01:52 +00:00
Angus Salkeld ad104c51bf convergence: sync_point fixes
This is a merge of 4 reviews:
I52f1611d34def3474acba0e5eee054e11c5fc5ad
Ic374a38c9d76763be341d3a80f53fa396c9c2256
Iecd21ccb4392369f66fa1b3a0cf55aad754aeac4
I77b81097d2dcf01efa540237ed5ae14896ed1670

- make sure sender is a tuple (otherwise the serialization
  function in sync_point breaks.)
- Update updated_time on any lifecycle operation (CREATE/UPDATE/DELETE)
  over a stack.
- adjust sync_point logic to account for deletes
   Done by having only a single stack sync point
   for both updates and deletes.
- Serialize/deserialize input_data for RPC
- Make GraphKeys the norm in the convergence worker
- move temp_update_requires functionality to tests
  During the initial stages of convergence, to simulate the entire
  cycle, some part of the worker code was written in stack.py.
  Now that the convergence worker is implemented, this code needs to
  be executed only in tests.
- Fix the dictionary structure that's passed to resource.(create/update)
- Temporarily disable loading cache_data for stack to help fix other
  issues.

Change-Id: Iecd21ccb4392369f66fa1b3a0cf55aad754aeac4
Co-Authored-by: Sirushti Murugesan <sirushti.murugesan@hp.com>
Co-Authored-by: Rakesh H S <rh-s@hp.com>
2015-06-19 08:24:19 +05:30
Sirushti Murugesan 252ce059c7 Convergence: Check-Resource skeleton
Currently, the patch does the following:

Kicks off workflow from stack.update_or_create:
  Once the dependency graph is calculated, the leaves
  of the graph are all casted into the RPC worker bus.

Worker RPC check_resource workflow:
  Workers will then start working on each resource
  individually. Once a resource operation is finished,
  sync points are used to check whether the parent resource
  can be worked on. Resources that finish early will
  wait for their siblings to finish. The sibling that
  finishes last will trigger the creation/update/deletion
  of its parent. This process then goes on for all nodes
  until the roots of the graph are processed.

Marks stack as complete when roots have finished.
  Once the roots of the graph are successfully processed,
  the previous raw template, which was needed for rollback
  in case something went wrong, is deleted from the
  database. The stack is then marked as complete.

Largely follows the convergence prototype code in
github.com/zaneb/heat-convergence-prototype/blob/resumable/converge/converger.py
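The sync-point mechanism the workflow relies on can be sketched as follows (a simplification that ignores the database and RPC layers; the class and method names are illustrative):

```python
class SyncPoint:
    """Tracks which predecessors of a graph node have completed."""

    def __init__(self, required):
        self._waiting = set(required)

    def notify(self, done):
        # record a finished child; whichever caller empties the set is
        # the last sibling to finish, and it triggers the parent node
        self._waiting.discard(done)
        return not self._waiting

parent = SyncPoint(required={'child-a', 'child-b'})
assert parent.notify('child-a') is False  # still waiting on a sibling
assert parent.notify('child-b') is True   # last sibling triggers parent
```

In the real system this update must be atomic across workers, which is why contention and retries on the sync point rows matter in the commits above.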

Implements blueprint convergence-check-workflow

Change-Id: I67cfdc452ba406198c96afba57aa4e756408105d
2015-06-02 00:52:36 +05:30
Rakesh H S 5189bbebab Convergence prepare traversal
Generates the graph for traversal in convergence.

* Updates current traversal for the stack
* Deletes any sync_point entries of previous traversal
* Generates the graph for traversal based on
  - resources loaded from db for the stack
  - resources that exist in present template
* Stores resource.current_template_id and resource.requires
* Stores the edges of graph in stack.current_deps
* Creates sync_points for each node in the graph
* Creates a sync_point for the stack.
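The stored graph can be pictured as edges over (resource, update/cleanup) node keys, roughly as below (an illustration; the real code derives the edges from resources loaded from the database):

```python
def traversal_edges(requires):
    # requires maps each resource id to the ids it depends on
    edges = []
    for rsrc, deps in requires.items():
        for dep in deps:
            # updates flow from dependencies to dependents...
            edges.append(((dep, True), (rsrc, True)))
            # ...while cleanup of the previous traversal runs in reverse
            edges.append(((rsrc, False), (dep, False)))
    return edges

# resource 2 depends on resource 1: update 1 before 2, clean up 2 before 1
assert traversal_edges({2: [1]}) == [((1, True), (2, True)),
                                     ((2, False), (1, False))]
```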

blueprint convergence-prepare-traversal

Change-Id: I507e67b39c820ed46d3b269fc76d6cf18d0ef2d7
2015-05-03 17:44:04 +05:30