Fix data migration script error

There are two types of data migration:
1. Automatic - happens during Barbican API startup
2. Manual - run on demand when a migration is required

The manual migration is performed by running the bin/barbican-db-manage.py
script. The script fails with an "UnboundExecutionError" because
SQLAlchemy's metadata reflection conflicts with Alembic's version
processing when checking whether the table exists. This commit
fixes the issue by replacing the SQLAlchemy metadata calls with
Alembic calls, reusing the context and connection from the existing
"op".

Change-Id: I9bf65594a9e76b3f98d67bbd47a9cc7b97298de0
Closes-Bug: #1326654
Author: tsv
Date: 2014-06-06 01:43:00 -06:00
Parent: e5d347b779
Commit: 08c63fdbcb
1 changed file with 4 additions and 5 deletions

@@ -28,13 +28,12 @@ down_revision = '1a0c2cdafb38'
 
 from alembic import op
 import sqlalchemy as sa
-from barbican.model import repositories as rep
-
 
 def upgrade():
-    meta = sa.MetaData()
-    meta.reflect(bind=rep._ENGINE, only=['secret_store_metadata'])
-    if 'secret_store_metadata' not in meta.tables.keys():
+    ctx = op.get_context()
+    con = op.get_bind()
+    table_exists = ctx.dialect.has_table(con.engine, 'secret_store_metadata')
+    if not table_exists:
         op.create_table(
             'secret_store_metadata',
             sa.Column('id', sa.String(length=36), nullable=False),