Add sqlalchemy.exc.OperationalError to the retry decorator
Currently Mistral retries a DB transaction only on a DB deadlock (frequent on MySQL) and on a connection error. Both are worth retrying because the underlying issue may be temporary. This patch also adds sqlalchemy.exc.OperationalError to the list of retriable exceptions, since some of the errors wrapped into this exception may also be transient, such as the "Too many connections" error raised by MySQL. Some wrapped errors do not make sense to retry (e.g. a plain SQL error), but this should not be a problem: most of them surface during development and testing and get fixed before reaching production, and even if one occurs in production, the worst case is retrying the DB transaction up to the configured maximum number of attempts, currently hardcoded to 50.

Change-Id: Ie2fe988cdb8e4ca88c3e51f510d87320d3fca9a6
Closes-Bug: #1796242
This commit is contained in:
parent
9be7e928d6
commit
991734a294
@@ -12,10 +12,15 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

 from __future__ import absolute_import

 import functools

+from sqlalchemy import exc as sqla_exc
+
 from oslo_db import exception as db_exc
 from oslo_log import log as logging

 import tenacity

 from mistral import context
@@ -25,7 +30,11 @@ from mistral.services import security

 LOG = logging.getLogger(__name__)

-_RETRY_ERRORS = (db_exc.DBDeadlock, db_exc.DBConnectionError)
+_RETRY_ERRORS = (
+    db_exc.DBDeadlock,
+    db_exc.DBConnectionError,
+    sqla_exc.OperationalError
+)


 def _with_auth_context(auth_ctx, func, *args, **kw):
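The effect of widening `_RETRY_ERRORS` can be illustrated without the oslo.db, SQLAlchemy, or tenacity dependencies. The decorator below is a hypothetical stand-in for Mistral's tenacity-based retry logic, with stub exception classes in place of the real `db_exc.DBDeadlock` and `sqla_exc.OperationalError`; only the retry-on-transient-error pattern is taken from the patch itself.

```python
import functools

# Hypothetical stand-ins for the exception types listed in _RETRY_ERRORS;
# in Mistral these are db_exc.DBDeadlock, db_exc.DBConnectionError and
# sqla_exc.OperationalError.
class DBDeadlock(Exception):
    pass


class OperationalError(Exception):
    pass


_RETRY_ERRORS = (DBDeadlock, OperationalError)


def retry_on_db_error(func, max_attempts=50):
    """Retry func on transient DB errors, up to max_attempts times.

    Mistral's real decorator delegates this loop to tenacity; 50 mirrors
    the hardcoded attempt limit mentioned in the commit message.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(1, max_attempts + 1):
            try:
                return func(*args, **kwargs)
            except _RETRY_ERRORS:
                if attempt == max_attempts:
                    raise  # give up after the final attempt
    return wrapper


calls = {"count": 0}


@retry_on_db_error
def flaky_txn():
    """Fail twice with a 'deadlock', then commit."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise DBDeadlock("deadlock detected")
    return "committed"


print(flaky_txn())  # retried transparently; prints "committed"
```

A non-retriable error (e.g. a syntax error in the SQL) would not match `_RETRY_ERRORS` and would propagate immediately, which is the behavior the commit message accepts as a tolerable worst case.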