Retire Packaging Deb project repos
This commit is part of a series to retire the Packaging Deb project. Step 2 is to remove all content from the project repos, replacing it with a README that explains where ongoing work now takes place and how to recover the repo if it is needed again at some future point (as described in https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project).

Change-Id: I17ea6db0c1cd6373a2978adc466e5533d5b35aaa
parent e88131f2c0
commit 8e081c03d2
@@ -1,5 +0,0 @@
[run]
include=alembic/*

[report]
omit=alembic/testing/*
@@ -1,13 +0,0 @@
*.pyc
*.pyo
/build/
dist/
/docs/build/output/
*.orig
alembic.ini
.venv
*.egg-info
.coverage
coverage.xml
.tox
*.patch
@@ -1,6 +0,0 @@
[gerrit]
host=gerrit.sqlalchemy.org
project=zzzeek/alembic
defaultbranch=master
port=29418
CHANGES
@@ -1,15 +0,0 @@
=====
MOVED
=====

Please see:

/docs/changelog.html

/docs/build/changelog.rst

or

http://alembic.zzzcomputing.com/en/latest/changelog.html

for the current CHANGES.
LICENSE
@@ -1,20 +0,0 @@
This is the MIT license: http://www.opensource.org/licenses/mit-license.php

Copyright (C) 2009-2017 by Michael Bayer.
Alembic is a trademark of Michael Bayer.

Permission is hereby granted, free of charge, to any person obtaining a copy of this
software and associated documentation files (the "Software"), to deal in the Software
without restriction, including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons
to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or
substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
@@ -1,9 +0,0 @@
recursive-include docs *.html *.css *.txt *.js *.jpg *.png *.py Makefile *.rst *.sty
recursive-include tests *.py *.dat
recursive-include alembic/templates *.mako README *.py

include README* LICENSE run_tests.py CHANGES* tox.ini

prune docs/build/output
@@ -0,0 +1,14 @@
This project is no longer maintained.

The contents of this repository are still available in the Git
source code management system. To see the contents of this
repository before it reached its end of life, please check out the
previous commit with "git checkout HEAD^1".

For ongoing work on maintaining OpenStack packages in the Debian
distribution, please see the Debian OpenStack packaging team at
https://wiki.debian.org/OpenStack/.

For any further questions, please email
openstack-dev@lists.openstack.org or join #openstack-dev on
Freenode.
README.rst
@@ -1,78 +0,0 @@
Alembic is a database migrations tool written by the author
of `SQLAlchemy <http://www.sqlalchemy.org>`_.  A migrations tool
offers the following functionality:

* Can emit ALTER statements to a database in order to change
  the structure of tables and other constructs
* Provides a system whereby "migration scripts" may be constructed;
  each script indicates a particular series of steps that can "upgrade" a
  target database to a new version, and optionally a series of steps that can
  "downgrade" similarly, doing the same steps in reverse.
* Allows the scripts to execute in some sequential manner.

The goals of Alembic are:

* Very open ended and transparent configuration and operation.  A new
  Alembic environment is generated from a set of templates which is selected
  among a set of options when setup first occurs. The templates then deposit a
  series of scripts that define fully how database connectivity is established
  and how migration scripts are invoked; the migration scripts themselves are
  generated from a template within that series of scripts. The scripts can
  then be further customized to define exactly how databases will be
  interacted with and what structure new migration files should take.
* Full support for transactional DDL.  The default scripts ensure that all
  migrations occur within a transaction - for those databases which support
  this (Postgresql, Microsoft SQL Server), migrations can be tested with no
  need to manually undo changes upon failure.
* Minimalist script construction.  Basic operations like renaming
  tables/columns, adding/removing columns, changing column attributes can be
  performed through one line commands like alter_column(), rename_table(),
  add_constraint().  There is no need to recreate full SQLAlchemy Table
  structures for simple operations like these - the functions themselves
  generate minimalist schema structures behind the scenes to achieve the given
  DDL sequence.
* "auto generation" of migrations.  While real world migrations are far more
  complex than what can be automatically determined, Alembic can still
  eliminate the initial grunt work in generating new migration directives
  from an altered schema.  The ``--autogenerate`` feature will inspect the
  current status of a database using SQLAlchemy's schema inspection
  capabilities, compare it to the current state of the database model as
  specified in Python, and generate a series of "candidate" migrations,
  rendering them into a new migration script as Python directives. The
  developer then edits the new file, adding additional directives and data
  migrations as needed, to produce a finished migration. Table and column
  level changes can be detected, with constraints and indexes to follow as
  well.
* Full support for migrations generated as SQL scripts.  Those of us who
  work in corporate environments know that direct access to DDL commands on a
  production database is a rare privilege, and DBAs want textual SQL scripts.
  Alembic's usage model and commands are oriented towards being able to run a
  series of migrations into a textual output file as easily as it runs them
  directly to a database.  Care must be taken in this mode to not invoke other
  operations that rely upon in-memory SELECTs of rows - Alembic tries to
  provide helper constructs like bulk_insert() to help with data-oriented
  operations that are compatible with script-based DDL.
* Non-linear, dependency-graph versioning.  Scripts are given UUID
  identifiers similarly to a DVCS, and the linkage of one script to the next
  is achieved via human-editable markers within the scripts themselves.
  The structure of a set of migration files is considered as a
  directed-acyclic graph, meaning any migration file can be dependent
  on any other arbitrary set of migration files, or none at
  all.  Through this open-ended system, migration files can be organized
  into branches, multiple roots, and mergepoints, without restriction.
  Commands are provided to produce new branches, roots, and merges of
  branches automatically.
* Provide a library of ALTER constructs that can be used by any SQLAlchemy
  application.  The DDL constructs build upon SQLAlchemy's own DDLElement base
  and can be used standalone by any application or script.
* At long last, bring SQLite and its inability to ALTER things into the fold,
  but in such a way that SQLite's very special workflow needs are accommodated
  in an explicit way that makes the most of a bad situation, through the
  concept of a "batch" migration, where multiple changes to a table can
  be batched together to form a series of instructions for a single, subsequent
  "move-and-copy" workflow.  You can even use "move-and-copy" workflow for
  other databases, if you want to recreate a table in the background
  on a busy system.

Documentation and status of Alembic are at http://alembic.zzzcomputing.com/
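The "one line commands" and autogenerate workflow described in the README above translate directly into small migration scripts. A minimal sketch of such a script follows; the revision identifiers, table, and column names here are illustrative placeholders, not anything from this repo::

    """add nickname to account - a hypothetical, hand-finished revision."""
    from alembic import op
    import sqlalchemy as sa

    # revision identifiers used by Alembic (placeholder values)
    revision = '1975ea83b712'
    down_revision = None


    def upgrade():
        # one-line directives; no full Table objects required
        op.add_column('account', sa.Column('nickname', sa.String(50)))
        op.alter_column('account', 'name', nullable=False)


    def downgrade():
        # the same steps, in reverse
        op.alter_column('account', 'name', nullable=True)
        op.drop_column('account', 'nickname')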
@@ -1,35 +0,0 @@
Running Unit Tests
==================

Tests can be run via py.test, via the nose front-end
script, or via the Python setup.py script::

    py.test

    python run_tests.py

    python setup.py test

There's also a tox.ini file with several configurations::

    tox

Setting up Optional Databases
-----------------------------

The test suite will attempt to run a subset of tests against various
database backends, including Postgresql and MySQL.  It uses the database
URLs in the [db] section of setup.cfg to locate a URL for particular backend types.
If the URL cannot be loaded, either because the requisite DBAPI is
not present, or if the target database is found to be not accessible,
the test is skipped.

To run tests for these backends, replace URLs with working ones
inside the setup.cfg file.  Setting a URL here requires that the
corresponding DBAPI is installed as well as that the target database
is running.  A connection to the database should provide access
to a *blank* schema, where tables will be created and dropped.  It
is critical that this schema have no tables in it already.

For Postgresql, it is also necessary that the target database contain
a user-accessible schema called "test_schema".
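As a concrete sketch, the ``[db]`` section referred to above holds SQLAlchemy database URLs keyed by backend name; the key names and URLs below are illustrative placeholders, not the repo's actual values::

    [db]
    default = sqlite:///:memory:
    postgresql = postgresql://scott:tiger@127.0.0.1:5432/test
    mysql = mysql://scott:tiger@127.0.0.1:3306/test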
@@ -1,15 +0,0 @@
from os import path

__version__ = '0.9.4'

package_dir = path.abspath(path.dirname(__file__))


from . import op  # noqa
from . import context  # noqa

import sys
from .runtime import environment
from .runtime import migration
sys.modules['alembic.migration'] = migration
sys.modules['alembic.environment'] = environment
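The two ``sys.modules`` assignments above are what keep the pre-0.8 import paths alive after these modules moved into ``alembic.runtime``. A quick sketch of the observable effect::

    import alembic.migration            # resolved from sys.modules; no such file exists
    import alembic.runtime.migration    # the actual module location

    assert alembic.migration is alembic.runtime.migration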
@@ -1,8 +0,0 @@
from .api import (  # noqa
    compare_metadata, _render_migration_diffs,
    produce_migrations, render_python_code,
    RevisionContext
)
from .compare import _produce_net_changes, comparators  # noqa
from .render import render_op_text, renderers  # noqa
from .rewriter import Rewriter  # noqa
@@ -1,480 +0,0 @@
"""Provide the 'autogenerate' feature which can produce migration operations
automatically."""

from ..operations import ops
from . import render
from . import compare
from .. import util
from sqlalchemy.engine.reflection import Inspector
import contextlib


def compare_metadata(context, metadata):
    """Compare a database schema to that given in a
    :class:`~sqlalchemy.schema.MetaData` instance.

    The database connection is presented in the context
    of a :class:`.MigrationContext` object, which
    provides database connectivity as well as optional
    comparison functions to use for datatypes and
    server defaults - see the "autogenerate" arguments
    at :meth:`.EnvironmentContext.configure`
    for details on these.

    The return format is a list of "diff" directives,
    each representing individual differences::

        from alembic.migration import MigrationContext
        from alembic.autogenerate import compare_metadata
        from sqlalchemy.schema import SchemaItem
        from sqlalchemy.types import TypeEngine
        from sqlalchemy import (create_engine, MetaData, Column,
                Integer, String, Table)
        import pprint

        engine = create_engine("sqlite://")

        engine.execute('''
            create table foo (
                id integer not null primary key,
                old_data varchar,
                x integer
            )''')

        engine.execute('''
            create table bar (
                data varchar
            )''')

        metadata = MetaData()
        Table('foo', metadata,
            Column('id', Integer, primary_key=True),
            Column('data', Integer),
            Column('x', Integer, nullable=False)
        )
        Table('bat', metadata,
            Column('info', String)
        )

        mc = MigrationContext.configure(engine.connect())

        diff = compare_metadata(mc, metadata)
        pprint.pprint(diff, indent=2, width=20)

    Output::

        [ ( 'add_table',
            Table('bat', MetaData(bind=None),
                Column('info', String(), table=<bat>), schema=None)),
          ( 'remove_table',
            Table(u'bar', MetaData(bind=None),
                Column(u'data', VARCHAR(), table=<bar>), schema=None)),
          ( 'add_column',
            None,
            'foo',
            Column('data', Integer(), table=<foo>)),
          ( 'remove_column',
            None,
            'foo',
            Column(u'old_data', VARCHAR(), table=None)),
          [ ( 'modify_nullable',
              None,
              'foo',
              u'x',
              { 'existing_server_default': None,
                'existing_type': INTEGER()},
              True,
              False)]]


    :param context: a :class:`.MigrationContext`
     instance.
    :param metadata: a :class:`~sqlalchemy.schema.MetaData`
     instance.

    .. seealso::

        :func:`.produce_migrations` - produces a :class:`.MigrationScript`
        structure based on metadata comparison.

    """

    migration_script = produce_migrations(context, metadata)
    return migration_script.upgrade_ops.as_diffs()


def produce_migrations(context, metadata):
    """Produce a :class:`.MigrationScript` structure based on schema
    comparison.

    This function does essentially what :func:`.compare_metadata` does,
    but then runs the resulting list of diffs to produce the full
    :class:`.MigrationScript` object.  For an example of what this looks like,
    see the example in :ref:`customizing_revision`.

    .. versionadded:: 0.8.0

    .. seealso::

        :func:`.compare_metadata` - returns more fundamental "diff"
        data from comparing a schema.

    """

    autogen_context = AutogenContext(context, metadata=metadata)

    migration_script = ops.MigrationScript(
        rev_id=None,
        upgrade_ops=ops.UpgradeOps([]),
        downgrade_ops=ops.DowngradeOps([]),
    )

    compare._populate_migration_script(autogen_context, migration_script)

    return migration_script

def render_python_code(
        up_or_down_op,
        sqlalchemy_module_prefix='sa.',
        alembic_module_prefix='op.',
        render_as_batch=False,
        imports=(),
        render_item=None,
):
    """Render Python code given an :class:`.UpgradeOps` or
    :class:`.DowngradeOps` object.

    This is a convenience function that can be used to test the
    autogenerate output of a user-defined :class:`.MigrationScript` structure.

    """
    opts = {
        'sqlalchemy_module_prefix': sqlalchemy_module_prefix,
        'alembic_module_prefix': alembic_module_prefix,
        'render_item': render_item,
        'render_as_batch': render_as_batch,
    }

    autogen_context = AutogenContext(None, opts=opts)
    autogen_context.imports = set(imports)
    return render._indent(render._render_cmd_body(
        up_or_down_op, autogen_context))


def _render_migration_diffs(context, template_args):
    """legacy, used by test_autogen_composition at the moment"""

    autogen_context = AutogenContext(context)

    upgrade_ops = ops.UpgradeOps([])
    compare._produce_net_changes(autogen_context, upgrade_ops)

    migration_script = ops.MigrationScript(
        rev_id=None,
        upgrade_ops=upgrade_ops,
        downgrade_ops=upgrade_ops.reverse(),
    )

    render._render_python_into_templatevars(
        autogen_context, migration_script, template_args
    )

class AutogenContext(object):
    """Maintains configuration and state that's specific to an
    autogenerate operation."""

    metadata = None
    """The :class:`~sqlalchemy.schema.MetaData` object
    representing the destination.

    This object is the one that is passed within ``env.py``
    to the :paramref:`.EnvironmentContext.configure.target_metadata`
    parameter.  It represents the structure of :class:`.Table` and other
    objects as stated in the current database model, and represents the
    destination structure for the database being examined.

    While the :class:`~sqlalchemy.schema.MetaData` object is primarily
    known as a collection of :class:`~sqlalchemy.schema.Table` objects,
    it also has an :attr:`~sqlalchemy.schema.MetaData.info` dictionary
    that may be used by end-user schemes to store additional schema-level
    objects that are to be compared in custom autogeneration schemes.

    """

    connection = None
    """The :class:`~sqlalchemy.engine.base.Connection` object currently
    connected to the database backend being compared.

    This is obtained from the :attr:`.MigrationContext.bind` and is
    ultimately set up in the ``env.py`` script.

    """

    dialect = None
    """The :class:`~sqlalchemy.engine.Dialect` object currently in use.

    This is normally obtained from the
    :attr:`~sqlalchemy.engine.base.Connection.dialect` attribute.

    """

    imports = None
    """A ``set()`` which contains string Python import directives.

    The directives are to be rendered into the ``${imports}`` section
    of a script template.  The set is normally empty and can be modified
    within hooks such as the
    :paramref:`.EnvironmentContext.configure.render_item` hook.

    .. versionadded:: 0.8.3

    .. seealso::

        :ref:`autogen_render_types`

    """

    migration_context = None
    """The :class:`.MigrationContext` established by the ``env.py`` script."""

    def __init__(
            self, migration_context, metadata=None,
            opts=None, autogenerate=True):

        if autogenerate and \
                migration_context is not None and migration_context.as_sql:
            raise util.CommandError(
                "autogenerate can't use as_sql=True as it prevents querying "
                "the database for schema information")

        if opts is None:
            opts = migration_context.opts

        self.metadata = metadata = opts.get('target_metadata', None) \
            if metadata is None else metadata

        if autogenerate and metadata is None and \
                migration_context is not None and \
                migration_context.script is not None:
            raise util.CommandError(
                "Can't proceed with --autogenerate option; environment "
                "script %s does not provide "
                "a MetaData object or sequence of objects to the context." % (
                    migration_context.script.env_py_location
                ))

        include_symbol = opts.get('include_symbol', None)
        include_object = opts.get('include_object', None)

        object_filters = []
        if include_symbol:
            def include_symbol_filter(
                    object, name, type_, reflected, compare_to):
                if type_ == "table":
                    return include_symbol(name, object.schema)
                else:
                    return True
            object_filters.append(include_symbol_filter)
        if include_object:
            object_filters.append(include_object)

        self._object_filters = object_filters

        self.migration_context = migration_context
        if self.migration_context is not None:
            self.connection = self.migration_context.bind
            self.dialect = self.migration_context.dialect

        self.imports = set()
        self.opts = opts
        self._has_batch = False

    @util.memoized_property
    def inspector(self):
        return Inspector.from_engine(self.connection)

    @contextlib.contextmanager
    def _within_batch(self):
        self._has_batch = True
        yield
        self._has_batch = False

    def run_filters(self, object_, name, type_, reflected, compare_to):
        """Run the context's object filters and return True if the targets
        should be part of the autogenerate operation.

        This method should be run for every kind of object encountered within
        an autogenerate operation, giving the environment the chance
        to filter what objects should be included in the comparison.
        The filters here are produced directly via the
        :paramref:`.EnvironmentContext.configure.include_object`
        and :paramref:`.EnvironmentContext.configure.include_symbol`
        functions, if present.

        """
        for fn in self._object_filters:
            if not fn(object_, name, type_, reflected, compare_to):
                return False
        else:
            return True

    @util.memoized_property
    def sorted_tables(self):
        """Return an aggregate of the :attr:`.MetaData.sorted_tables` collection(s).

        For a sequence of :class:`.MetaData` objects, this
        concatenates the :attr:`.MetaData.sorted_tables` collection
        for each individual :class:`.MetaData` in the order of the
        sequence.  It does **not** collate the sorted tables collections.

        .. versionadded:: 0.9.0

        """
        result = []
        for m in util.to_list(self.metadata):
            result.extend(m.sorted_tables)
        return result

    @util.memoized_property
    def table_key_to_table(self):
        """Return an aggregate of the :attr:`.MetaData.tables` dictionaries.

        The :attr:`.MetaData.tables` collection is a dictionary of table key
        to :class:`.Table`; this method aggregates the dictionary across
        multiple :class:`.MetaData` objects into one dictionary.

        Duplicate table keys are **not** supported; if two :class:`.MetaData`
        objects contain the same table key, an exception is raised.

        .. versionadded:: 0.9.0

        """
        result = {}
        for m in util.to_list(self.metadata):
            intersect = set(result).intersection(set(m.tables))
            if intersect:
                raise ValueError(
                    "Duplicate table keys across multiple "
                    "MetaData objects: %s" %
                    (", ".join('"%s"' % key for key in sorted(intersect)))
                )

            result.update(m.tables)
        return result

class RevisionContext(object):
    """Maintains configuration and state that's specific to a revision
    file generation operation."""

    def __init__(self, config, script_directory, command_args,
                 process_revision_directives=None):
        self.config = config
        self.script_directory = script_directory
        self.command_args = command_args
        self.process_revision_directives = process_revision_directives
        self.template_args = {
            'config': config  # Let templates use config for
                              # e.g. multiple databases
        }
        self.generated_revisions = [
            self._default_revision()
        ]

    def _to_script(self, migration_script):
        template_args = {}
        for k, v in self.template_args.items():
            template_args.setdefault(k, v)

        if getattr(migration_script, '_needs_render', False):
            autogen_context = self._last_autogen_context

            # clear out existing imports if we are doing multiple
            # renders
            autogen_context.imports = set()
            if migration_script.imports:
                autogen_context.imports.update(migration_script.imports)
            render._render_python_into_templatevars(
                autogen_context, migration_script, template_args
            )

        return self.script_directory.generate_revision(
            migration_script.rev_id,
            migration_script.message,
            refresh=True,
            head=migration_script.head,
            splice=migration_script.splice,
            branch_labels=migration_script.branch_label,
            version_path=migration_script.version_path,
            depends_on=migration_script.depends_on,
            **template_args)

    def run_autogenerate(self, rev, migration_context):
        self._run_environment(rev, migration_context, True)

    def run_no_autogenerate(self, rev, migration_context):
        self._run_environment(rev, migration_context, False)

    def _run_environment(self, rev, migration_context, autogenerate):
        if autogenerate:
            if self.command_args['sql']:
                raise util.CommandError(
                    "Using --sql with --autogenerate does not make any sense")
            if set(self.script_directory.get_revisions(rev)) != \
                    set(self.script_directory.get_revisions("heads")):
                raise util.CommandError("Target database is not up to date.")

        upgrade_token = migration_context.opts['upgrade_token']
        downgrade_token = migration_context.opts['downgrade_token']

        migration_script = self.generated_revisions[-1]
        if not getattr(migration_script, '_needs_render', False):
            migration_script.upgrade_ops_list[-1].upgrade_token = upgrade_token
            migration_script.downgrade_ops_list[-1].downgrade_token = \
                downgrade_token
            migration_script._needs_render = True
        else:
            migration_script._upgrade_ops.append(
                ops.UpgradeOps([], upgrade_token=upgrade_token)
            )
            migration_script._downgrade_ops.append(
                ops.DowngradeOps([], downgrade_token=downgrade_token)
            )

        self._last_autogen_context = autogen_context = \
            AutogenContext(migration_context, autogenerate=autogenerate)

        if autogenerate:
            compare._populate_migration_script(
                autogen_context, migration_script)

        if self.process_revision_directives:
            self.process_revision_directives(
                migration_context, rev, self.generated_revisions)

        hook = migration_context.opts['process_revision_directives']
        if hook:
            hook(migration_context, rev, self.generated_revisions)

        for migration_script in self.generated_revisions:
            migration_script._needs_render = True

    def _default_revision(self):
        op = ops.MigrationScript(
            rev_id=self.command_args['rev_id'] or util.rev_id(),
            message=self.command_args['message'],
            upgrade_ops=ops.UpgradeOps([]),
            downgrade_ops=ops.DowngradeOps([]),
            head=self.command_args['head'],
            splice=self.command_args['splice'],
            branch_label=self.command_args['branch_label'],
            version_path=self.command_args['version_path'],
            depends_on=self.command_args['depends_on']
        )
        return op

    def generate_scripts(self):
        for generated_revision in self.generated_revisions:
            yield self._to_script(generated_revision)
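Taken together, ``produce_migrations()`` and ``render_python_code()`` above give a programmatic path from a live connection to rendered migration source, without going through the ``alembic revision`` command. A minimal sketch, with a placeholder engine URL and model::

    from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
    from alembic.migration import MigrationContext
    from alembic.autogenerate import produce_migrations, render_python_code

    metadata = MetaData()
    Table('account', metadata,
          Column('id', Integer, primary_key=True),
          Column('name', String(50), nullable=False))

    engine = create_engine("sqlite://")   # placeholder URL

    with engine.connect() as conn:
        mc = MigrationContext.configure(conn)
        migration_script = produce_migrations(mc, metadata)

    # render the upgrade operations as they would appear in a revision file
    print(render_python_code(migration_script.upgrade_ops))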
|
@ -1,866 +0,0 @@
|
|||
from sqlalchemy import schema as sa_schema, types as sqltypes
|
||||
from sqlalchemy.engine.reflection import Inspector
|
||||
from sqlalchemy import event
|
||||
from ..operations import ops
|
||||
import logging
|
||||
from .. import util
|
||||
from ..util import compat
|
||||
from ..util import sqla_compat
|
||||
from sqlalchemy.util import OrderedSet
|
||||
import re
|
||||
from .render import _user_defined_render
|
||||
import contextlib
|
||||
from alembic.ddl.base import _fk_spec
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _populate_migration_script(autogen_context, migration_script):
|
||||
upgrade_ops = migration_script.upgrade_ops_list[-1]
|
||||
downgrade_ops = migration_script.downgrade_ops_list[-1]
|
||||
|
||||
_produce_net_changes(autogen_context, upgrade_ops)
|
||||
upgrade_ops.reverse_into(downgrade_ops)
|
||||
|
||||
|
||||
comparators = util.Dispatcher(uselist=True)
|
||||
|
||||
|
||||
def _produce_net_changes(autogen_context, upgrade_ops):
|
||||
|
||||
connection = autogen_context.connection
|
||||
include_schemas = autogen_context.opts.get('include_schemas', False)
|
||||
|
||||
inspector = Inspector.from_engine(connection)
|
||||
|
||||
default_schema = connection.dialect.default_schema_name
|
||||
if include_schemas:
|
||||
schemas = set(inspector.get_schema_names())
|
||||
# replace default schema name with None
|
||||
schemas.discard("information_schema")
|
||||
# replace the "default" schema with None
|
||||
schemas.discard(default_schema)
|
||||
schemas.add(None)
|
||||
else:
|
||||
schemas = [None]
|
||||
|
||||
comparators.dispatch("schema", autogen_context.dialect.name)(
|
||||
autogen_context, upgrade_ops, schemas
|
||||
)
|
||||
|
||||
|
||||
@comparators.dispatch_for("schema")
|
||||
def _autogen_for_tables(autogen_context, upgrade_ops, schemas):
|
||||
inspector = autogen_context.inspector
|
||||
|
||||
conn_table_names = set()
|
||||
|
||||
version_table_schema = \
|
||||
autogen_context.migration_context.version_table_schema
|
||||
version_table = autogen_context.migration_context.version_table
|
||||
|
||||
for s in schemas:
|
||||
tables = set(inspector.get_table_names(schema=s))
|
||||
if s == version_table_schema:
|
||||
tables = tables.difference(
|
||||
[autogen_context.migration_context.version_table]
|
||||
)
|
||||
conn_table_names.update(zip([s] * len(tables), tables))
|
||||
|
||||
metadata_table_names = OrderedSet(
|
||||
[(table.schema, table.name) for table in autogen_context.sorted_tables]
|
||||
).difference([(version_table_schema, version_table)])
|
||||
|
||||
_compare_tables(conn_table_names, metadata_table_names,
|
||||
inspector, upgrade_ops, autogen_context)
|
||||
|
||||
|
||||
def _compare_tables(conn_table_names, metadata_table_names,
|
||||
inspector, upgrade_ops, autogen_context):
|
||||
|
||||
default_schema = inspector.bind.dialect.default_schema_name
|
||||
|
||||
# tables coming from the connection will not have "schema"
|
||||
# set if it matches default_schema_name; so we need a list
|
||||
# of table names from local metadata that also have "None" if schema
|
||||
# == default_schema_name. Most setups will be like this anyway but
|
||||
# some are not (see #170)
|
||||
metadata_table_names_no_dflt_schema = OrderedSet([
|
||||
(schema if schema != default_schema else None, tname)
|
||||
for schema, tname in metadata_table_names
|
||||
])
|
||||
|
||||
# to adjust for the MetaData collection storing the tables either
|
||||
# as "schemaname.tablename" or just "tablename", create a new lookup
|
||||
# which will match the "non-default-schema" keys to the Table object.
|
||||
tname_to_table = dict(
|
||||
(
|
||||
no_dflt_schema,
|
||||
autogen_context.table_key_to_table[
|
||||
sa_schema._get_table_key(tname, schema)]
|
||||
)
|
||||
for no_dflt_schema, (schema, tname) in zip(
|
||||
metadata_table_names_no_dflt_schema,
|
||||
metadata_table_names)
|
||||
)
|
||||
metadata_table_names = metadata_table_names_no_dflt_schema
|
||||
|
||||
for s, tname in metadata_table_names.difference(conn_table_names):
|
||||
name = '%s.%s' % (s, tname) if s else tname
|
||||
metadata_table = tname_to_table[(s, tname)]
|
||||
if autogen_context.run_filters(
|
||||
metadata_table, tname, "table", False, None):
|
||||
upgrade_ops.ops.append(
|
||||
ops.CreateTableOp.from_table(metadata_table))
|
||||
log.info("Detected added table %r", name)
|
||||
modify_table_ops = ops.ModifyTableOps(tname, [], schema=s)
|
||||
|
||||
comparators.dispatch("table")(
|
||||
autogen_context, modify_table_ops,
|
||||
s, tname, None, metadata_table
|
||||
)
|
||||
if not modify_table_ops.is_empty():
|
||||
upgrade_ops.ops.append(modify_table_ops)
|
||||
|
||||
removal_metadata = sa_schema.MetaData()
|
||||
for s, tname in conn_table_names.difference(metadata_table_names):
|
||||
name = sa_schema._get_table_key(tname, s)
|
||||
exists = name in removal_metadata.tables
|
||||
t = sa_schema.Table(tname, removal_metadata, schema=s)
|
||||
|
||||
if not exists:
|
||||
event.listen(
|
||||
t,
|
||||
"column_reflect",
|
||||
autogen_context.migration_context.impl.
|
||||
_compat_autogen_column_reflect(inspector))
|
||||
inspector.reflecttable(t, None)
|
||||
if autogen_context.run_filters(t, tname, "table", True, None):
|
||||
upgrade_ops.ops.append(
|
||||
ops.DropTableOp.from_table(t)
|
||||
)
|
||||
log.info("Detected removed table %r", name)
|
||||
|
||||
existing_tables = conn_table_names.intersection(metadata_table_names)
|
||||
|
||||
existing_metadata = sa_schema.MetaData()
|
||||
conn_column_info = {}
|
||||
for s, tname in existing_tables:
|
||||
name = sa_schema._get_table_key(tname, s)
|
||||
exists = name in existing_metadata.tables
|
||||
t = sa_schema.Table(tname, existing_metadata, schema=s)
|
||||
if not exists:
|
||||
event.listen(
|
||||
t,
|
||||
"column_reflect",
|
||||
autogen_context.migration_context.impl.
|
||||
_compat_autogen_column_reflect(inspector))
|
||||
inspector.reflecttable(t, None)
|
||||
conn_column_info[(s, tname)] = t
|
||||
|
||||
for s, tname in sorted(existing_tables, key=lambda x: (x[0] or '', x[1])):
|
||||
s = s or None
|
||||
name = '%s.%s' % (s, tname) if s else tname
|
||||
metadata_table = tname_to_table[(s, tname)]
|
||||
conn_table = existing_metadata.tables[name]
|
||||
|
||||
if autogen_context.run_filters(
|
||||
metadata_table, tname, "table", False,
|
||||
conn_table):
|
||||
|
||||
modify_table_ops = ops.ModifyTableOps(tname, [], schema=s)
|
||||
with _compare_columns(
|
||||
s, tname,
|
||||
conn_table,
|
||||
metadata_table,
|
||||
modify_table_ops, autogen_context, inspector):
|
||||
|
||||
comparators.dispatch("table")(
|
||||
autogen_context, modify_table_ops,
|
||||
s, tname, conn_table, metadata_table
|
||||
)
|
||||
|
||||
if not modify_table_ops.is_empty():
|
||||
upgrade_ops.ops.append(modify_table_ops)
|
||||
|
||||
|
||||
def _make_index(params, conn_table):
|
||||
# TODO: add .info such as 'duplicates_constraint'
|
||||
return sa_schema.Index(
|
||||
params['name'],
|
||||
*[conn_table.c[cname] for cname in params['column_names']],
|
||||
unique=params['unique']
|
||||
)
|
||||
|
||||
|
||||
def _make_unique_constraint(params, conn_table):
|
||||
uq = sa_schema.UniqueConstraint(
|
||||
*[conn_table.c[cname] for cname in params['column_names']],
|
||||
name=params['name']
|
||||
)
|
||||
if 'duplicates_index' in params:
|
||||
uq.info['duplicates_index'] = params['duplicates_index']
|
||||
|
||||
return uq
|
||||
|
||||
|
||||
def _make_foreign_key(params, conn_table):
|
||||
tname = params['referred_table']
|
||||
if params['referred_schema']:
|
||||
tname = "%s.%s" % (params['referred_schema'], tname)
|
||||
|
||||
options = params.get('options', {})
|
||||
|
||||
const = sa_schema.ForeignKeyConstraint(
|
||||
[conn_table.c[cname] for cname in params['constrained_columns']],
|
||||
["%s.%s" % (tname, n) for n in params['referred_columns']],
|
||||
onupdate=options.get('onupdate'),
|
||||
ondelete=options.get('ondelete'),
|
||||
deferrable=options.get('deferrable'),
|
||||
initially=options.get('initially'),
|
||||
name=params['name']
|
||||
)
|
||||
# needed by 0.7
|
||||
conn_table.append_constraint(const)
|
||||
return const
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def _compare_columns(schema, tname, conn_table, metadata_table,
|
||||
modify_table_ops, autogen_context, inspector):
|
||||
name = '%s.%s' % (schema, tname) if schema else tname
|
||||
metadata_cols_by_name = dict((c.name, c) for c in metadata_table.c)
|
||||
conn_col_names = dict((c.name, c) for c in conn_table.c)
|
||||
metadata_col_names = OrderedSet(sorted(metadata_cols_by_name))
|
||||
|
||||
for cname in metadata_col_names.difference(conn_col_names):
|
||||
if autogen_context.run_filters(
|
||||
metadata_cols_by_name[cname], cname,
|
||||
"column", False, None):
|
||||
modify_table_ops.ops.append(
|
||||
ops.AddColumnOp.from_column_and_tablename(
|
||||
schema, tname, metadata_cols_by_name[cname])
|
||||
)
|
||||
log.info("Detected added column '%s.%s'", name, cname)
|
||||
|
||||
for colname in metadata_col_names.intersection(conn_col_names):
|
||||
metadata_col = metadata_cols_by_name[colname]
|
||||
conn_col = conn_table.c[colname]
|
||||
if not autogen_context.run_filters(
|
||||
metadata_col, colname, "column", False,
|
||||
conn_col):
|
||||
continue
|
||||
alter_column_op = ops.AlterColumnOp(
|
||||
tname, colname, schema=schema)
|
||||
|
||||
comparators.dispatch("column")(
|
||||
autogen_context, alter_column_op,
|
||||
schema, tname, colname, conn_col, metadata_col
|
||||
)
|
||||
|
||||
if alter_column_op.has_changes():
|
||||
modify_table_ops.ops.append(alter_column_op)
|
||||
|
||||
yield
|
||||
|
||||
for cname in set(conn_col_names).difference(metadata_col_names):
|
||||
if autogen_context.run_filters(
|
||||
conn_table.c[cname], cname,
|
||||
"column", True, None):
|
||||
modify_table_ops.ops.append(
|
||||
ops.DropColumnOp.from_column_and_tablename(
|
||||
schema, tname, conn_table.c[cname]
|
||||
)
|
||||
)
|
||||
log.info("Detected removed column '%s.%s'", name, cname)
|
||||
|
||||
|
||||
class _constraint_sig(object):
|
||||
|
||||
def md_name_to_sql_name(self, context):
|
||||
return self.name
|
||||
|
||||
def __eq__(self, other):
|
||||
return self.const == other.const
|
||||
|
||||
def __ne__(self, other):
|
||||
return self.const != other.const
|
||||
|
||||
def __hash__(self):
|
||||
return hash(self.const)
|
||||
|
||||
|
||||
class _uq_constraint_sig(_constraint_sig):
|
||||
is_index = False
|
||||
is_unique = True
|
||||
|
||||
def __init__(self, const):
|
||||
self.const = const
|
||||
self.name = const.name
|
||||
self.sig = tuple(sorted([col.name for col in const.columns]))
|
||||
|
||||
@property
|
||||
def column_names(self):
|
||||
return [col.name for col in self.const.columns]
|
||||
|
||||
|
||||
class _ix_constraint_sig(_constraint_sig):
|
||||
is_index = True
|
||||
|
||||
def __init__(self, const):
|
||||
self.const = const
|
||||
self.name = const.name
|
||||
self.sig = tuple(sorted([col.name for col in const.columns]))
|
||||
self.is_unique = bool(const.unique)
|
||||
|
||||
def md_name_to_sql_name(self, context):
|
||||
return sqla_compat._get_index_final_name(context.dialect, self.const)
|
||||
|
||||
@property
|
||||
def column_names(self):
|
||||
return sqla_compat._get_index_column_names(self.const)
|
||||
|
||||
|
||||
class _fk_constraint_sig(_constraint_sig):
|
||||
def __init__(self, const, include_options=False):
|
||||
self.const = const
|
||||
self.name = const.name
|
||||
|
||||
(
|
||||
self.source_schema, self.source_table,
|
||||
self.source_columns, self.target_schema, self.target_table,
|
||||
self.target_columns,
|
||||
onupdate, ondelete,
|
||||
deferrable, initially) = _fk_spec(const)
|
||||
|
||||
self.sig = (
|
||||
self.source_schema, self.source_table, tuple(self.source_columns),
|
||||
self.target_schema, self.target_table, tuple(self.target_columns)
|
||||
)
|
||||
if include_options:
|
||||
self.sig += (
|
||||
(None if onupdate.lower() == 'no action'
|
||||
else onupdate.lower())
|
||||
if onupdate else None,
|
||||
(None if ondelete.lower() == 'no action'
|
||||
else ondelete.lower())
|
||||
if ondelete else None,
|
||||
# convert initially + deferrable into one three-state value
|
||||
"initially_deferrable"
|
||||
if initially and initially.lower() == "deferred"
|
||||
else "deferrable" if deferrable
|
||||
else "not deferrable"
|
||||
)
|
||||
|
||||
|
||||
@comparators.dispatch_for("table")
|
||||
def _compare_indexes_and_uniques(
|
||||
autogen_context, modify_ops, schema, tname, conn_table,
|
||||
metadata_table):
|
||||
|
||||
inspector = autogen_context.inspector
|
||||
is_create_table = conn_table is None
|
||||
|
||||
# 1a. get raw indexes and unique constraints from metadata ...
|
||||
metadata_unique_constraints = set(
|
||||
uq for uq in metadata_table.constraints
|
||||
if isinstance(uq, sa_schema.UniqueConstraint)
|
||||
)
|
||||
metadata_indexes = set(metadata_table.indexes)
|
||||
|
||||
conn_uniques = conn_indexes = frozenset()
|
||||
|
||||
supports_unique_constraints = False
|
||||
|
||||
unique_constraints_duplicate_unique_indexes = False
|
||||
|
||||
if conn_table is not None:
|
||||
# 1b. ... and from connection, if the table exists
|
||||
if hasattr(inspector, "get_unique_constraints"):
|
||||
try:
|
||||
conn_uniques = inspector.get_unique_constraints(
|
||||
tname, schema=schema)
|
||||
supports_unique_constraints = True
|
||||
except NotImplementedError:
|
||||
pass
|
||||
except TypeError:
|
||||
# number of arguments is off for the base
|
||||
# method in SQLAlchemy due to the cache decorator
|
||||
# not being present
|
||||
pass
|
||||
else:
|
||||
for uq in conn_uniques:
|
||||
if uq.get('duplicates_index'):
|
||||
unique_constraints_duplicate_unique_indexes = True
|
||||
try:
|
||||
conn_indexes = inspector.get_indexes(tname, schema=schema)
|
||||
except NotImplementedError:
|
||||
pass
|
||||
|
||||
# 2. convert conn-level objects from raw inspector records
|
||||
# into schema objects
|
||||
conn_uniques = set(_make_unique_constraint(uq_def, conn_table)
|
||||
for uq_def in conn_uniques)
|
||||
conn_indexes = set(_make_index(ix, conn_table) for ix in conn_indexes)
|
||||
|
||||
# 2a. if the dialect dupes unique indexes as unique constraints
|
||||
# (mysql and oracle), correct for that
|
||||
|
||||
if unique_constraints_duplicate_unique_indexes:
|
||||
_correct_for_uq_duplicates_uix(
|
||||
conn_uniques, conn_indexes,
|
||||
metadata_unique_constraints,
|
||||
metadata_indexes
|
||||
)
|
||||
|
||||
# 3. give the dialect a chance to omit indexes and constraints that
|
||||
# we know are either added implicitly by the DB or that the DB
|
||||
# can't accurately report on
|
||||
autogen_context.migration_context.impl.\
|
||||
correct_for_autogen_constraints(
|
||||
conn_uniques, conn_indexes,
|
||||
metadata_unique_constraints,
|
||||
metadata_indexes)
|
||||
|
||||
# 4. organize the constraints into "signature" collections, the
|
||||
# _constraint_sig() objects provide a consistent facade over both
|
||||
# Index and UniqueConstraint so we can easily work with them
|
||||
# interchangeably
|
||||
metadata_unique_constraints = set(_uq_constraint_sig(uq)
|
||||
for uq in metadata_unique_constraints
|
||||
)
|
||||
|
||||
metadata_indexes = set(_ix_constraint_sig(ix) for ix in metadata_indexes)
|
||||
|
||||
conn_unique_constraints = set(
|
||||
_uq_constraint_sig(uq) for uq in conn_uniques)
|
||||
|
||||
conn_indexes = set(_ix_constraint_sig(ix) for ix in conn_indexes)
|
||||
|
||||
# 5. index things by name, for those objects that have names
|
||||
metadata_names = dict(
|
||||
(c.md_name_to_sql_name(autogen_context), c) for c in
|
||||
metadata_unique_constraints.union(metadata_indexes)
|
||||
if c.name is not None)
|
||||
|
||||
conn_uniques_by_name = dict((c.name, c) for c in conn_unique_constraints)
|
||||
conn_indexes_by_name = dict((c.name, c) for c in conn_indexes)
|
||||
|
||||
conn_names = dict((c.name, c) for c in
|
||||
conn_unique_constraints.union(conn_indexes)
|
||||
if c.name is not None)
|
||||
|
||||
doubled_constraints = dict(
|
||||
(name, (conn_uniques_by_name[name], conn_indexes_by_name[name]))
|
||||
for name in set(
|
||||
conn_uniques_by_name).intersection(conn_indexes_by_name)
|
||||
)
|
||||
|
||||
# 6. index things by "column signature", to help with unnamed unique
|
||||
# constraints.
|
||||
conn_uniques_by_sig = dict((uq.sig, uq) for uq in conn_unique_constraints)
|
||||
metadata_uniques_by_sig = dict(
|
||||
(uq.sig, uq) for uq in metadata_unique_constraints)
|
||||
metadata_indexes_by_sig = dict(
|
||||
(ix.sig, ix) for ix in metadata_indexes)
|
||||
unnamed_metadata_uniques = dict(
|
||||
(uq.sig, uq) for uq in
|
||||
metadata_unique_constraints if uq.name is None)
|
||||
|
||||
# assumptions:
|
||||
# 1. a unique constraint or an index from the connection *always*
|
||||
# has a name.
|
||||
# 2. an index on the metadata side *always* has a name.
|
||||
# 3. a unique constraint on the metadata side *might* have a name.
|
||||
# 4. The backend may double up indexes as unique constraints and
|
||||
# vice versa (e.g. MySQL, Postgresql)
|
||||
|
||||
def obj_added(obj):
|
||||
if obj.is_index:
|
||||
if autogen_context.run_filters(
|
||||
obj.const, obj.name, "index", False, None):
|
||||
modify_ops.ops.append(
|
||||
ops.CreateIndexOp.from_index(obj.const)
|
||||
)
|
||||
log.info("Detected added index '%s' on %s",
|
||||
obj.name, ', '.join([
|
||||
"'%s'" % obj.column_names
|
||||
]))
|
||||
else:
|
||||
if not supports_unique_constraints:
|
||||
# can't report unique indexes as added if we don't
|
||||
# detect them
|
||||
return
|
||||
if is_create_table:
|
||||
# unique constraints are created inline with table defs
|
||||
return
|
||||
if autogen_context.run_filters(
|
||||
obj.const, obj.name,
|
||||
"unique_constraint", False, None):
|
||||
modify_ops.ops.append(
|
||||
ops.AddConstraintOp.from_constraint(obj.const)
|
||||
)
|
||||
log.info("Detected added unique constraint '%s' on %s",
|
||||
obj.name, ', '.join([
|
||||
"'%s'" % obj.column_names
|
||||
]))
|
||||
|
||||
def obj_removed(obj):
|
||||
if obj.is_index:
|
||||
if obj.is_unique and not supports_unique_constraints:
|
||||
# many databases double up unique constraints
|
||||
# as unique indexes. without that list we can't
|
||||
# be sure what we're doing here
|
||||
return
|
||||
|
||||
if autogen_context.run_filters(
|
||||
obj.const, obj.name, "index", True, None):
|
||||
modify_ops.ops.append(
|
||||
ops.DropIndexOp.from_index(obj.const)
|
||||
)
|
||||
log.info(
|
||||
"Detected removed index '%s' on '%s'", obj.name, tname)
|
||||
else:
|
||||
if autogen_context.run_filters(
|
||||
obj.const, obj.name,
|
||||
"unique_constraint", True, None):
|
||||
modify_ops.ops.append(
|
||||
ops.DropConstraintOp.from_constraint(obj.const)
|
||||
)
|
||||
log.info("Detected removed unique constraint '%s' on '%s'",
|
||||
obj.name, tname
|
||||
)
|
||||
|
||||
def obj_changed(old, new, msg):
|
||||
if old.is_index:
|
||||
if autogen_context.run_filters(
|
||||
new.const, new.name, "index",
|
||||
False, old.const):
|
||||
log.info("Detected changed index '%s' on '%s':%s",
|
||||
old.name, tname, ', '.join(msg)
|
||||
)
|
||||
modify_ops.ops.append(
|
||||
ops.DropIndexOp.from_index(old.const)
|
||||
)
|
||||
modify_ops.ops.append(
|
||||
ops.CreateIndexOp.from_index(new.const)
|
||||
)
|
||||
else:
|
||||
if autogen_context.run_filters(
|
||||
new.const, new.name,
|
||||
"unique_constraint", False, old.const):
|
||||
log.info("Detected changed unique constraint '%s' on '%s':%s",
|
||||
old.name, tname, ', '.join(msg)
|
||||
)
|
||||
modify_ops.ops.append(
|
||||
ops.DropConstraintOp.from_constraint(old.const)
|
||||
)
|
||||
modify_ops.ops.append(
|
||||
ops.AddConstraintOp.from_constraint(new.const)
|
||||
)
|
||||
|
||||
for added_name in sorted(set(metadata_names).difference(conn_names)):
|
||||
obj = metadata_names[added_name]
|
||||
obj_added(obj)
|
||||
|
||||
for existing_name in sorted(set(metadata_names).intersection(conn_names)):
|
||||
metadata_obj = metadata_names[existing_name]
|
||||
|
||||
if existing_name in doubled_constraints:
|
||||
conn_uq, conn_idx = doubled_constraints[existing_name]
|
||||
if metadata_obj.is_index:
|
||||
conn_obj = conn_idx
|
||||
else:
|
||||
conn_obj = conn_uq
|
||||
else:
|
||||
conn_obj = conn_names[existing_name]
|
||||
|
||||
if conn_obj.is_index != metadata_obj.is_index:
|
||||
obj_removed(conn_obj)
|
||||
obj_added(metadata_obj)
|
||||
else:
|
||||
msg = []
|
||||
if conn_obj.is_unique != metadata_obj.is_unique:
|
||||
msg.append(' unique=%r to unique=%r' % (
|
||||
conn_obj.is_unique, metadata_obj.is_unique
|
||||
))
|
||||
if conn_obj.sig != metadata_obj.sig:
|
||||
msg.append(' columns %r to %r' % (
|
||||
conn_obj.sig, metadata_obj.sig
|
||||
))
|
||||
|
||||
if msg:
|
||||
obj_changed(conn_obj, metadata_obj, msg)
|
||||
|
||||
for removed_name in sorted(set(conn_names).difference(metadata_names)):
|
||||
conn_obj = conn_names[removed_name]
|
||||
if not conn_obj.is_index and conn_obj.sig in unnamed_metadata_uniques:
|
||||
continue
|
||||
elif removed_name in doubled_constraints:
|
||||
if conn_obj.sig not in metadata_indexes_by_sig and \
|
||||
conn_obj.sig not in metadata_uniques_by_sig:
|
||||
conn_uq, conn_idx = doubled_constraints[removed_name]
|
||||
obj_removed(conn_uq)
|
||||
obj_removed(conn_idx)
|
||||
else:
|
||||
obj_removed(conn_obj)
|
||||
|
||||
for uq_sig in unnamed_metadata_uniques:
|
||||
if uq_sig not in conn_uniques_by_sig:
|
||||
obj_added(unnamed_metadata_uniques[uq_sig])
|
||||
|
||||
|
||||
def _correct_for_uq_duplicates_uix(
|
||||
conn_unique_constraints,
|
||||
conn_indexes,
|
||||
metadata_unique_constraints,
|
||||
metadata_indexes):
|
||||
|
||||
# dedupe unique indexes vs. constraints, since MySQL / Oracle
|
||||
# doesn't really have unique constraints as a separate construct.
|
||||
# but look in the metadata and try to maintain constructs
|
||||
# that already seem to be defined one way or the other
|
||||
# on that side. This logic was formerly local to MySQL dialect,
|
||||
# generalized to Oracle and others. See #276
|
||||
metadata_uq_names = set([
|
||||
cons.name for cons in metadata_unique_constraints
|
||||
if cons.name is not None])
|
||||
|
||||
unnamed_metadata_uqs = set([
|
||||
_uq_constraint_sig(cons).sig
|
||||
for cons in metadata_unique_constraints
|
||||
if cons.name is None
|
||||
])
|
||||
|
||||
metadata_ix_names = set([
|
||||
cons.name for cons in metadata_indexes if cons.unique])
|
||||
conn_ix_names = dict(
|
||||
(cons.name, cons) for cons in conn_indexes if cons.unique
|
||||
)
|
||||
|
||||
uqs_dupe_indexes = dict(
|
||||
(cons.name, cons) for cons in conn_unique_constraints
|
||||
if cons.info['duplicates_index']
|
||||
)
|
||||
for overlap in uqs_dupe_indexes:
|
||||
if overlap not in metadata_uq_names:
|
||||
if _uq_constraint_sig(uqs_dupe_indexes[overlap]).sig \
|
||||
not in unnamed_metadata_uqs:
|
||||
|
||||
conn_unique_constraints.discard(uqs_dupe_indexes[overlap])
|
||||
elif overlap not in metadata_ix_names:
|
||||
conn_indexes.discard(conn_ix_names[overlap])
|
||||
|
||||
|
||||
@comparators.dispatch_for("column")
|
||||
def _compare_nullable(
|
||||
autogen_context, alter_column_op, schema, tname, cname, conn_col,
|
||||
metadata_col):
|
||||
|
||||
# work around SQLAlchemy issue #3023
|
||||
if metadata_col.primary_key:
|
||||
return
|
||||
|
||||
metadata_col_nullable = metadata_col.nullable
|
||||
conn_col_nullable = conn_col.nullable
|
||||
alter_column_op.existing_nullable = conn_col_nullable
|
||||
|
||||
if conn_col_nullable is not metadata_col_nullable:
|
||||
alter_column_op.modify_nullable = metadata_col_nullable
|
||||
log.info("Detected %s on column '%s.%s'",
|
||||
"NULL" if metadata_col_nullable else "NOT NULL",
|
||||
tname,
|
||||
cname
|
||||
)
|
||||
|
||||
|
||||
@comparators.dispatch_for("column")
|
||||
def _setup_autoincrement(
|
||||
autogen_context, alter_column_op, schema, tname, cname, conn_col,
|
||||
metadata_col):
|
||||
|
||||
if metadata_col.table._autoincrement_column is metadata_col:
|
||||
alter_column_op.kw['autoincrement'] = True
|
||||
elif util.sqla_110 and metadata_col.autoincrement is True:
|
||||
alter_column_op.kw['autoincrement'] = True
|
||||
elif metadata_col.autoincrement is False:
|
||||
alter_column_op.kw['autoincrement'] = False
|
||||
|
||||
|
||||
@comparators.dispatch_for("column")
|
||||
def _compare_type(
|
||||
autogen_context, alter_column_op, schema, tname, cname, conn_col,
|
||||
metadata_col):
|
||||
|
||||
conn_type = conn_col.type
|
||||
alter_column_op.existing_type = conn_type
|
||||
metadata_type = metadata_col.type
|
||||
if conn_type._type_affinity is sqltypes.NullType:
|
||||
log.info("Couldn't determine database type "
|
||||
"for column '%s.%s'", tname, cname)
|
||||
return
|
||||
if metadata_type._type_affinity is sqltypes.NullType:
|
||||
log.info("Column '%s.%s' has no type within "
|
||||
"the model; can't compare", tname, cname)
|
||||
return
|
||||
|
||||
isdiff = autogen_context.migration_context._compare_type(
|
||||
conn_col, metadata_col)
|
||||
|
||||
if isdiff:
|
||||
        alter_column_op.modify_type = metadata_type
        log.info("Detected type change from %r to %r on '%s.%s'",
                 conn_type, metadata_type, tname, cname
                 )


def _render_server_default_for_compare(metadata_default,
                                       metadata_col, autogen_context):
    rendered = _user_defined_render(
        "server_default", metadata_default, autogen_context)
    if rendered is not False:
        return rendered

    if isinstance(metadata_default, sa_schema.DefaultClause):
        if isinstance(metadata_default.arg, compat.string_types):
            metadata_default = metadata_default.arg
        else:
            metadata_default = str(metadata_default.arg.compile(
                dialect=autogen_context.dialect))
    if isinstance(metadata_default, compat.string_types):
        if metadata_col.type._type_affinity is sqltypes.String:
            metadata_default = re.sub(r"^'|'$", "", metadata_default)
            return repr(metadata_default)
        else:
            return metadata_default
    else:
        return None


@comparators.dispatch_for("column")
def _compare_server_default(
        autogen_context, alter_column_op, schema, tname, cname,
        conn_col, metadata_col):

    metadata_default = metadata_col.server_default
    conn_col_default = conn_col.server_default
    if conn_col_default is None and metadata_default is None:
        return False
    rendered_metadata_default = _render_server_default_for_compare(
        metadata_default, metadata_col, autogen_context)

    rendered_conn_default = conn_col.server_default.arg.text \
        if conn_col.server_default else None

    alter_column_op.existing_server_default = conn_col_default

    isdiff = autogen_context.migration_context._compare_server_default(
        conn_col, metadata_col,
        rendered_metadata_default,
        rendered_conn_default
    )
    if isdiff:
        alter_column_op.modify_server_default = metadata_default
        log.info(
            "Detected server default on column '%s.%s'",
            tname, cname)


@comparators.dispatch_for("table")
def _compare_foreign_keys(
        autogen_context, modify_table_ops, schema, tname, conn_table,
        metadata_table):

    # if we're doing CREATE TABLE, all FKs are created
    # inline within the table def
    if conn_table is None:
        return

    inspector = autogen_context.inspector
    metadata_fks = set(
        fk for fk in metadata_table.constraints
        if isinstance(fk, sa_schema.ForeignKeyConstraint)
    )

    conn_fks = inspector.get_foreign_keys(tname, schema=schema)

    backend_reflects_fk_options = conn_fks and 'options' in conn_fks[0]

    conn_fks = set(_make_foreign_key(const, conn_table) for const in conn_fks)

    # give the dialect a chance to correct the FKs to match more
    # closely
    autogen_context.migration_context.impl.\
        correct_for_autogen_foreignkeys(
            conn_fks, metadata_fks,
        )

    metadata_fks = set(
        _fk_constraint_sig(fk, include_options=backend_reflects_fk_options)
        for fk in metadata_fks
    )

    conn_fks = set(
        _fk_constraint_sig(fk, include_options=backend_reflects_fk_options)
        for fk in conn_fks
    )

    conn_fks_by_sig = dict(
        (c.sig, c) for c in conn_fks
    )
    metadata_fks_by_sig = dict(
        (c.sig, c) for c in metadata_fks
    )

    metadata_fks_by_name = dict(
        (c.name, c) for c in metadata_fks if c.name is not None
    )
    conn_fks_by_name = dict(
        (c.name, c) for c in conn_fks if c.name is not None
    )

    def _add_fk(obj, compare_to):
        if autogen_context.run_filters(
                obj.const, obj.name, "foreign_key_constraint", False,
                compare_to):
            modify_table_ops.ops.append(
                ops.CreateForeignKeyOp.from_constraint(obj.const)
            )

            log.info(
                "Detected added foreign key (%s)(%s) on table %s%s",
                ", ".join(obj.source_columns),
                ", ".join(obj.target_columns),
                "%s." % obj.source_schema if obj.source_schema else "",
                obj.source_table)

    def _remove_fk(obj, compare_to):
        if autogen_context.run_filters(
                obj.const, obj.name, "foreign_key_constraint", True,
                compare_to):
            modify_table_ops.ops.append(
                ops.DropConstraintOp.from_constraint(obj.const)
            )
            log.info(
                "Detected removed foreign key (%s)(%s) on table %s%s",
                ", ".join(obj.source_columns),
                ", ".join(obj.target_columns),
                "%s." % obj.source_schema if obj.source_schema else "",
                obj.source_table)

    # so far it appears we don't need to do this by name at all.
    # SQLite doesn't preserve constraint names anyway

    for removed_sig in set(conn_fks_by_sig).difference(metadata_fks_by_sig):
        const = conn_fks_by_sig[removed_sig]
        if removed_sig not in metadata_fks_by_sig:
            compare_to = metadata_fks_by_name[const.name].const \
                if const.name in metadata_fks_by_name else None
            _remove_fk(const, compare_to)

    for added_sig in set(metadata_fks_by_sig).difference(conn_fks_by_sig):
        const = metadata_fks_by_sig[added_sig]
        if added_sig not in conn_fks_by_sig:
            compare_to = conn_fks_by_name[const.name].const \
                if const.name in conn_fks_by_name else None
            _add_fk(const, compare_to)
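
The ``run_filters`` calls above are the hook point for user-defined filtering
of detected constraints.  As a minimal sketch (assuming a typical ``env.py``
where ``context.configure()`` is being called), an ``include_object``
callable can veto foreign key comparison entirely:

    def include_object(object_, name, type_, reflected, compare_to):
        # type_ is the string passed to run_filters() above, e.g.
        # "foreign_key_constraint"; returning False skips the object
        if type_ == "foreign_key_constraint":
            return False
        return True

    context.configure(
        # ... connection / target_metadata arguments as usual ...
        include_object=include_object,
    )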
@@ -1,750 +0,0 @@
from sqlalchemy import schema as sa_schema, types as sqltypes, sql
from ..operations import ops
from ..util import compat
import re
from ..util.compat import string_types
from .. import util
from mako.pygen import PythonPrinter
from ..util.compat import StringIO


MAX_PYTHON_ARGS = 255

try:
    from sqlalchemy.sql.naming import conv

    def _render_gen_name(autogen_context, name):
        if isinstance(name, conv):
            return _f_name(_alembic_autogenerate_prefix(autogen_context), name)
        else:
            return name
except ImportError:
    def _render_gen_name(autogen_context, name):
        return name


def _indent(text):
    text = re.compile(r'^', re.M).sub("    ", text).strip()
    text = re.compile(r' +$', re.M).sub("", text)
    return text


def _render_python_into_templatevars(
        autogen_context, migration_script, template_args):
    imports = autogen_context.imports

    for upgrade_ops, downgrade_ops in zip(
            migration_script.upgrade_ops_list,
            migration_script.downgrade_ops_list):
        template_args[upgrade_ops.upgrade_token] = _indent(
            _render_cmd_body(upgrade_ops, autogen_context))
        template_args[downgrade_ops.downgrade_token] = _indent(
            _render_cmd_body(downgrade_ops, autogen_context))
    template_args['imports'] = "\n".join(sorted(imports))


default_renderers = renderers = util.Dispatcher()


def _render_cmd_body(op_container, autogen_context):

    buf = StringIO()
    printer = PythonPrinter(buf)

    printer.writeline(
        "# ### commands auto generated by Alembic - please adjust! ###"
    )

    if not op_container.ops:
        printer.writeline("pass")
    else:
        for op in op_container.ops:
            lines = render_op(autogen_context, op)

            for line in lines:
                printer.writeline(line)

    printer.writeline("# ### end Alembic commands ###")

    return buf.getvalue()


def render_op(autogen_context, op):
    renderer = renderers.dispatch(op)
    lines = util.to_list(renderer(autogen_context, op))
    return lines


def render_op_text(autogen_context, op):
    return "\n".join(render_op(autogen_context, op))


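As a rough usage sketch (assuming ``render_op_text`` is exported from
``alembic.autogenerate`` as in this tree, and that an ``AutogenContext``
is at hand from a running autogenerate pass), an op object can be turned
into migration source directly:

    from alembic.autogenerate import render_op_text
    from alembic.operations import ops

    # hypothetical op; autogen_context normally comes from the
    # autogenerate process itself
    op = ops.DropTableOp("accounts")
    print(render_op_text(autogen_context, op))
    # op.drop_table('accounts')
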
@renderers.dispatch_for(ops.ModifyTableOps)
def _render_modify_table(autogen_context, op):
    opts = autogen_context.opts
    render_as_batch = opts.get('render_as_batch', False)

    if op.ops:
        lines = []
        if render_as_batch:
            with autogen_context._within_batch():
                lines.append(
                    "with op.batch_alter_table(%r, schema=%r) as batch_op:"
                    % (op.table_name, op.schema)
                )
                for t_op in op.ops:
                    t_lines = render_op(autogen_context, t_op)
                    lines.extend(t_lines)
                lines.append("")
        else:
            for t_op in op.ops:
                t_lines = render_op(autogen_context, t_op)
                lines.extend(t_lines)

        return lines
    else:
        return [
            "pass"
        ]


@renderers.dispatch_for(ops.CreateTableOp)
def _add_table(autogen_context, op):
    table = op.to_table()

    args = [col for col in
            [_render_column(col, autogen_context) for col in table.columns]
            if col] + \
        sorted([rcons for rcons in
                [_render_constraint(cons, autogen_context) for cons in
                 table.constraints]
                if rcons is not None
                ])

    if len(args) > MAX_PYTHON_ARGS:
        args = '*[' + ',\n'.join(args) + ']'
    else:
        args = ',\n'.join(args)

    text = "%(prefix)screate_table(%(tablename)r,\n%(args)s" % {
        'tablename': _ident(op.table_name),
        'prefix': _alembic_autogenerate_prefix(autogen_context),
        'args': args,
    }
    if op.schema:
        text += ",\nschema=%r" % _ident(op.schema)
    for k in sorted(op.kw):
        text += ",\n%s=%r" % (k.replace(" ", "_"), op.kw[k])
    text += "\n)"
    return text


@renderers.dispatch_for(ops.DropTableOp)
def _drop_table(autogen_context, op):
    text = "%(prefix)sdrop_table(%(tname)r" % {
        "prefix": _alembic_autogenerate_prefix(autogen_context),
        "tname": _ident(op.table_name)
    }
    if op.schema:
        text += ", schema=%r" % _ident(op.schema)
    text += ")"
    return text


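For reference, the two renderers above emit migration lines of this general
shape (illustrative sample output, not part of this module):

    def upgrade():
        op.create_table('account',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('name', sa.String(length=50), nullable=True),
        sa.PrimaryKeyConstraint('id')
        )

    def downgrade():
        op.drop_table('account')
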
@renderers.dispatch_for(ops.CreateIndexOp)
def _add_index(autogen_context, op):
    index = op.to_index()

    has_batch = autogen_context._has_batch

    if has_batch:
        tmpl = "%(prefix)screate_index(%(name)r, [%(columns)s], "\
            "unique=%(unique)r%(kwargs)s)"
    else:
        tmpl = "%(prefix)screate_index(%(name)r, %(table)r, [%(columns)s], "\
            "unique=%(unique)r%(schema)s%(kwargs)s)"

    text = tmpl % {
        'prefix': _alembic_autogenerate_prefix(autogen_context),
        'name': _render_gen_name(autogen_context, index.name),
        'table': _ident(index.table.name),
        'columns': ", ".join(
            _get_index_rendered_expressions(index, autogen_context)),
        'unique': index.unique or False,
        'schema': (", schema=%r" % _ident(index.table.schema))
        if index.table.schema else '',
        'kwargs': (
            ', ' +
            ', '.join(
                ["%s=%s" %
                 (key, _render_potential_expr(val, autogen_context))
                 for key, val in index.kwargs.items()]))
        if len(index.kwargs) else ''
    }
    return text


@renderers.dispatch_for(ops.DropIndexOp)
def _drop_index(autogen_context, op):
    has_batch = autogen_context._has_batch

    if has_batch:
        tmpl = "%(prefix)sdrop_index(%(name)r)"
    else:
        tmpl = "%(prefix)sdrop_index(%(name)r, "\
            "table_name=%(table_name)r%(schema)s)"

    text = tmpl % {
        'prefix': _alembic_autogenerate_prefix(autogen_context),
        'name': _render_gen_name(autogen_context, op.index_name),
        'table_name': _ident(op.table_name),
        'schema': ((", schema=%r" % _ident(op.schema))
                   if op.schema else '')
    }
    return text


@renderers.dispatch_for(ops.CreateUniqueConstraintOp)
def _add_unique_constraint(autogen_context, op):
    return [_uq_constraint(op.to_constraint(), autogen_context, True)]


@renderers.dispatch_for(ops.CreateForeignKeyOp)
def _add_fk_constraint(autogen_context, op):

    args = [
        repr(
            _render_gen_name(autogen_context, op.constraint_name)),
    ]
    if not autogen_context._has_batch:
        args.append(
            repr(_ident(op.source_table))
        )

    args.extend(
        [
            repr(_ident(op.referent_table)),
            repr([_ident(col) for col in op.local_cols]),
            repr([_ident(col) for col in op.remote_cols])
        ]
    )

    kwargs = [
        'referent_schema',
        'onupdate', 'ondelete', 'initially',
        'deferrable', 'use_alter'
    ]
    if not autogen_context._has_batch:
        kwargs.insert(0, 'source_schema')

    for k in kwargs:
        if k in op.kw:
            value = op.kw[k]
            if value is not None:
                args.append("%s=%r" % (k, value))

    return "%(prefix)screate_foreign_key(%(args)s)" % {
        'prefix': _alembic_autogenerate_prefix(autogen_context),
        'args': ", ".join(args)
    }


@renderers.dispatch_for(ops.CreatePrimaryKeyOp)
def _add_pk_constraint(constraint, autogen_context):
    raise NotImplementedError()


@renderers.dispatch_for(ops.CreateCheckConstraintOp)
def _add_check_constraint(constraint, autogen_context):
    raise NotImplementedError()


@renderers.dispatch_for(ops.DropConstraintOp)
def _drop_constraint(autogen_context, op):

    if autogen_context._has_batch:
        template = "%(prefix)sdrop_constraint"\
            "(%(name)r, type_=%(type)r)"
    else:
        template = "%(prefix)sdrop_constraint"\
            "(%(name)r, '%(table_name)s'%(schema)s, type_=%(type)r)"

    text = template % {
        'prefix': _alembic_autogenerate_prefix(autogen_context),
        'name': _render_gen_name(
            autogen_context, op.constraint_name),
        'table_name': _ident(op.table_name),
        'type': op.constraint_type,
        'schema': (", schema='%s'" % _ident(op.schema))
        if op.schema else '',
    }
    return text


@renderers.dispatch_for(ops.AddColumnOp)
def _add_column(autogen_context, op):

    schema, tname, column = op.schema, op.table_name, op.column
    if autogen_context._has_batch:
        template = "%(prefix)sadd_column(%(column)s)"
    else:
        template = "%(prefix)sadd_column(%(tname)r, %(column)s"
        if schema:
            template += ", schema=%(schema)r"
        template += ")"
    text = template % {
        "prefix": _alembic_autogenerate_prefix(autogen_context),
        "tname": tname,
        "column": _render_column(column, autogen_context),
        "schema": schema
    }
    return text


@renderers.dispatch_for(ops.DropColumnOp)
def _drop_column(autogen_context, op):

    schema, tname, column_name = op.schema, op.table_name, op.column_name

    if autogen_context._has_batch:
        template = "%(prefix)sdrop_column(%(cname)r)"
    else:
        template = "%(prefix)sdrop_column(%(tname)r, %(cname)r"
        if schema:
            template += ", schema=%(schema)r"
        template += ")"

    text = template % {
        "prefix": _alembic_autogenerate_prefix(autogen_context),
        "tname": _ident(tname),
        "cname": _ident(column_name),
        "schema": _ident(schema)
    }
    return text


@renderers.dispatch_for(ops.AlterColumnOp)
def _alter_column(autogen_context, op):

    tname = op.table_name
    cname = op.column_name
    server_default = op.modify_server_default
    type_ = op.modify_type
    nullable = op.modify_nullable
    autoincrement = op.kw.get('autoincrement', None)
    existing_type = op.existing_type
    existing_nullable = op.existing_nullable
    existing_server_default = op.existing_server_default
    schema = op.schema

    indent = " " * 11

    if autogen_context._has_batch:
        template = "%(prefix)salter_column(%(cname)r"
    else:
        template = "%(prefix)salter_column(%(tname)r, %(cname)r"

    text = template % {
        'prefix': _alembic_autogenerate_prefix(
            autogen_context),
        'tname': tname,
        'cname': cname}
    if existing_type is not None:
        text += ",\n%sexisting_type=%s" % (
            indent,
            _repr_type(existing_type, autogen_context))
    if server_default is not False:
        rendered = _render_server_default(
            server_default, autogen_context)
        text += ",\n%sserver_default=%s" % (indent, rendered)

    if type_ is not None:
        text += ",\n%stype_=%s" % (indent,
                                   _repr_type(type_, autogen_context))
    if nullable is not None:
        text += ",\n%snullable=%r" % (
            indent, nullable,)
    if nullable is None and existing_nullable is not None:
        text += ",\n%sexisting_nullable=%r" % (
            indent, existing_nullable)
    if autoincrement is not None:
        text += ",\n%sautoincrement=%r" % (
            indent, autoincrement)
    if server_default is False and existing_server_default:
        rendered = _render_server_default(
            existing_server_default,
            autogen_context)
        text += ",\n%sexisting_server_default=%s" % (
            indent, rendered)
    if schema and not autogen_context._has_batch:
        text += ",\n%sschema=%r" % (indent, schema)
    text += ")"
    return text


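The ``indent = " " * 11`` above exists to line continuation arguments up
under ``op.alter_column(``; a typical rendered result looks like this
(illustrative sample output):

    op.alter_column('account', 'name',
               existing_type=sa.String(length=50),
               nullable=True)
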
class _f_name(object):

    def __init__(self, prefix, name):
        self.prefix = prefix
        self.name = name

    def __repr__(self):
        return "%sf(%r)" % (self.prefix, _ident(self.name))


def _ident(name):
    """produce a __repr__() object for a string identifier that may
    use quoted_name() in SQLAlchemy 0.9 and greater.

    The issue worked around here is that quoted_name() doesn't have
    very good repr() behavior by itself when unicode is involved.

    """
    if name is None:
        return name
    elif util.sqla_09 and isinstance(name, sql.elements.quoted_name):
        if compat.py2k:
            # the attempt to encode to ascii here isn't super ideal,
            # however we are trying to cut down on an explosion of
            # u'' literals only when py2k + SQLA 0.9, in particular
            # makes unit tests testing code generation very difficult
            try:
                return name.encode('ascii')
            except UnicodeError:
                return compat.text_type(name)
        else:
            return compat.text_type(name)
    elif isinstance(name, compat.string_types):
        return name


def _render_potential_expr(value, autogen_context, wrap_in_text=True):
    if isinstance(value, sql.ClauseElement):
        if util.sqla_08:
            compile_kw = dict(compile_kwargs={
                'literal_binds': True, "include_table": False})
        else:
            compile_kw = {}

        if wrap_in_text:
            template = "%(prefix)stext(%(sql)r)"
        else:
            template = "%(sql)r"

        return template % {
            "prefix": _sqlalchemy_autogenerate_prefix(autogen_context),
            "sql": compat.text_type(
                value.compile(dialect=autogen_context.dialect,
                              **compile_kw)
            )
        }

    else:
        return repr(value)


def _get_index_rendered_expressions(idx, autogen_context):
    if util.sqla_08:
        return [repr(_ident(getattr(exp, "name", None)))
                if isinstance(exp, sa_schema.Column)
                else _render_potential_expr(exp, autogen_context)
                for exp in idx.expressions]
    else:
        return [
            repr(_ident(getattr(col, "name", None))) for col in idx.columns]


def _uq_constraint(constraint, autogen_context, alter):
    opts = []

    has_batch = autogen_context._has_batch

    if constraint.deferrable:
        opts.append(("deferrable", str(constraint.deferrable)))
    if constraint.initially:
        opts.append(("initially", str(constraint.initially)))
    if not has_batch and alter and constraint.table.schema:
        opts.append(("schema", _ident(constraint.table.schema)))
    if not alter and constraint.name:
        opts.append(
            ("name",
             _render_gen_name(autogen_context, constraint.name)))

    if alter:
        args = [
            repr(_render_gen_name(
                autogen_context, constraint.name))]
        if not has_batch:
            args += [repr(_ident(constraint.table.name))]
        args.append(repr([_ident(col.name) for col in constraint.columns]))
        args.extend(["%s=%r" % (k, v) for k, v in opts])
        return "%(prefix)screate_unique_constraint(%(args)s)" % {
            'prefix': _alembic_autogenerate_prefix(autogen_context),
            'args': ", ".join(args)
        }
    else:
        args = [repr(_ident(col.name)) for col in constraint.columns]
        args.extend(["%s=%r" % (k, v) for k, v in opts])
        return "%(prefix)sUniqueConstraint(%(args)s)" % {
            "prefix": _sqlalchemy_autogenerate_prefix(autogen_context),
            "args": ", ".join(args)
        }


def _user_autogenerate_prefix(autogen_context, target):
    prefix = autogen_context.opts['user_module_prefix']
    if prefix is None:
        return "%s." % target.__module__
    else:
        return prefix


def _sqlalchemy_autogenerate_prefix(autogen_context):
    return autogen_context.opts['sqlalchemy_module_prefix'] or ''


def _alembic_autogenerate_prefix(autogen_context):
    if autogen_context._has_batch:
        return 'batch_op.'
    else:
        return autogen_context.opts['alembic_module_prefix'] or ''


def _user_defined_render(type_, object_, autogen_context):
    if 'render_item' in autogen_context.opts:
        render = autogen_context.opts['render_item']
        if render:
            rendered = render(type_, object_, autogen_context)
            if rendered is not False:
                return rendered
    return False


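``_user_defined_render`` is what gives the ``render_item`` configure option
its effect.  A minimal sketch of such a hook in ``env.py`` follows;
``MySpecialType`` and the import string are hypothetical user-side names:

    def render_item(type_, obj, autogen_context):
        # take over rendering for one custom type; everything else
        # falls through to the default renderers by returning False
        if type_ == 'type' and isinstance(obj, MySpecialType):
            autogen_context.imports.add("from mymodel import MySpecialType")
            return "MySpecialType()"
        return False

    context.configure(
        # ... connection / target_metadata arguments as usual ...
        render_item=render_item,
    )
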
def _render_column(column, autogen_context):
    rendered = _user_defined_render("column", column, autogen_context)
    if rendered is not False:
        return rendered

    opts = []
    if column.server_default:
        rendered = _render_server_default(
            column.server_default, autogen_context
        )
        if rendered:
            opts.append(("server_default", rendered))

    if not column.autoincrement:
        opts.append(("autoincrement", column.autoincrement))

    if column.nullable is not None:
        opts.append(("nullable", column.nullable))

    # TODO: for non-ascii colname, assign a "key"
    return "%(prefix)sColumn(%(name)r, %(type)s, %(kw)s)" % {
        'prefix': _sqlalchemy_autogenerate_prefix(autogen_context),
        'name': _ident(column.name),
        'type': _repr_type(column.type, autogen_context),
        'kw': ", ".join(["%s=%s" % (kwname, val) for kwname, val in opts])
    }


def _render_server_default(default, autogen_context, repr_=True):
    rendered = _user_defined_render("server_default", default, autogen_context)
    if rendered is not False:
        return rendered

    if isinstance(default, sa_schema.DefaultClause):
        if isinstance(default.arg, compat.string_types):
            default = default.arg
        else:
            return _render_potential_expr(default.arg, autogen_context)

    if isinstance(default, string_types) and repr_:
        default = repr(re.sub(r"^'|'$", "", default))

    return default


def _repr_type(type_, autogen_context):
    rendered = _user_defined_render("type", type_, autogen_context)
    if rendered is not False:
        return rendered

    if hasattr(autogen_context.migration_context, 'impl'):
        impl_rt = autogen_context.migration_context.impl.render_type(
            type_, autogen_context)

    mod = type(type_).__module__
    imports = autogen_context.imports
    if mod.startswith("sqlalchemy.dialects"):
        dname = re.match(r"sqlalchemy\.dialects\.(\w+)", mod).group(1)
        if imports is not None:
            imports.add("from sqlalchemy.dialects import %s" % dname)
        if impl_rt:
            return impl_rt
        else:
            return "%s.%r" % (dname, type_)
    elif mod.startswith("sqlalchemy."):
        prefix = _sqlalchemy_autogenerate_prefix(autogen_context)
        return "%s%r" % (prefix, type_)
    else:
        prefix = _user_autogenerate_prefix(autogen_context, type_)
        return "%s%r" % (prefix, type_)


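The dialect branch above is why, for example, a PostgreSQL-specific type in
the model renders with a module prefix plus a collected import (illustrative
sample of the resulting migration text):

    # rendered column in the migration:
    #     sa.Column('guid', postgresql.UUID(), nullable=True)
    # import added to the migration's import block:
    #     from sqlalchemy.dialects import postgresql
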
_constraint_renderers = util.Dispatcher()


def _render_constraint(constraint, autogen_context):
    try:
        renderer = _constraint_renderers.dispatch(constraint)
    except ValueError:
        util.warn("No renderer is established for object %r" % constraint)
        return "[Unknown Python object %r]" % constraint
    else:
        return renderer(constraint, autogen_context)


@_constraint_renderers.dispatch_for(sa_schema.PrimaryKeyConstraint)
def _render_primary_key(constraint, autogen_context):
    rendered = _user_defined_render("primary_key", constraint, autogen_context)
    if rendered is not False:
        return rendered

    if not constraint.columns:
        return None

    opts = []
    if constraint.name:
        opts.append(("name", repr(
            _render_gen_name(autogen_context, constraint.name))))
    return "%(prefix)sPrimaryKeyConstraint(%(args)s)" % {
        "prefix": _sqlalchemy_autogenerate_prefix(autogen_context),
        "args": ", ".join(
            [repr(c.name) for c in constraint.columns] +
            ["%s=%s" % (kwname, val) for kwname, val in opts]
        ),
    }


def _fk_colspec(fk, metadata_schema):
    """Implement a 'safe' version of ForeignKey._get_colspec() that
    never tries to resolve the remote table.

    """
    colspec = fk._get_colspec()
    tokens = colspec.split(".")
    tname, colname = tokens[-2:]

    if metadata_schema is not None and len(tokens) == 2:
        table_fullname = "%s.%s" % (metadata_schema, tname)
    else:
        table_fullname = ".".join(tokens[0:-1])

    if fk.parent is not None and fk.parent.table is not None:
        # try to resolve the remote table and adjust for column.key
        parent_metadata = fk.parent.table.metadata
        if table_fullname in parent_metadata.tables:
            colname = _ident(
                parent_metadata.tables[table_fullname].c[colname].name)

    colspec = "%s.%s" % (table_fullname, colname)

    return colspec


def _populate_render_fk_opts(constraint, opts):

    if constraint.onupdate:
        opts.append(("onupdate", repr(constraint.onupdate)))
    if constraint.ondelete:
        opts.append(("ondelete", repr(constraint.ondelete)))
    if constraint.initially:
        opts.append(("initially", repr(constraint.initially)))
    if constraint.deferrable:
        opts.append(("deferrable", repr(constraint.deferrable)))
    if constraint.use_alter:
        opts.append(("use_alter", repr(constraint.use_alter)))


@_constraint_renderers.dispatch_for(sa_schema.ForeignKeyConstraint)
def _render_foreign_key(constraint, autogen_context):
    rendered = _user_defined_render("foreign_key", constraint, autogen_context)
    if rendered is not False:
        return rendered

    opts = []
    if constraint.name:
        opts.append(("name", repr(
            _render_gen_name(autogen_context, constraint.name))))

    _populate_render_fk_opts(constraint, opts)

    apply_metadata_schema = constraint.parent.metadata.schema
    return "%(prefix)sForeignKeyConstraint([%(cols)s], "\
        "[%(refcols)s], %(args)s)" % {
            "prefix": _sqlalchemy_autogenerate_prefix(autogen_context),
            "cols": ", ".join(
                "%r" % _ident(f.parent.name) for f in constraint.elements),
            "refcols": ", ".join(repr(_fk_colspec(f, apply_metadata_schema))
                                 for f in constraint.elements),
            "args": ", ".join(
                ["%s=%s" % (kwname, val) for kwname, val in opts]
            ),
        }


@_constraint_renderers.dispatch_for(sa_schema.UniqueConstraint)
def _render_unique_constraint(constraint, autogen_context):
    rendered = _user_defined_render("unique", constraint, autogen_context)
    if rendered is not False:
        return rendered

    return _uq_constraint(constraint, autogen_context, False)


@_constraint_renderers.dispatch_for(sa_schema.CheckConstraint)
def _render_check_constraint(constraint, autogen_context):
    rendered = _user_defined_render("check", constraint, autogen_context)
    if rendered is not False:
        return rendered

    # detect the constraint being part of
    # a parent type which is probably in the Table already.
    # ideally SQLAlchemy would give us more of a first class
    # way to detect this.
    if constraint._create_rule and \
            hasattr(constraint._create_rule, 'target') and \
            isinstance(constraint._create_rule.target,
                       sqltypes.TypeEngine):
        return None
    opts = []
    if constraint.name:
        opts.append(
            (
                "name",
                repr(
                    _render_gen_name(
                        autogen_context, constraint.name))
            )
        )
    return "%(prefix)sCheckConstraint(%(sqltext)s%(opts)s)" % {
        "prefix": _sqlalchemy_autogenerate_prefix(autogen_context),
        "opts": ", " + (", ".join("%s=%s" % (k, v)
                                  for k, v in opts)) if opts else "",
        "sqltext": _render_potential_expr(
            constraint.sqltext, autogen_context, wrap_in_text=False)
    }


@renderers.dispatch_for(ops.ExecuteSQLOp)
def _execute_sql(autogen_context, op):
    if not isinstance(op.sqltext, string_types):
        raise NotImplementedError(
            "Autogenerate rendering of SQL Expression language constructs "
            "not supported here; please use a plain SQL string"
        )
    return 'op.execute(%r)' % op.sqltext


renderers = default_renderers.branch()
@@ -1,150 +0,0 @@
from alembic import util
from alembic.operations import ops


class Rewriter(object):
    """A helper object that allows easy 'rewriting' of ops streams.

    The :class:`.Rewriter` object is intended to be passed along
    to the
    :paramref:`.EnvironmentContext.configure.process_revision_directives`
    parameter in an ``env.py`` script.  Once constructed, any number
    of rewrite functions can be associated with it, which will be given
    the opportunity to modify the structure without having to have explicit
    knowledge of the overall structure.

    The function is passed the :class:`.MigrationContext` object and
    ``revision`` tuple that are normally passed to the
    :paramref:`.EnvironmentContext.configure.process_revision_directives`
    function, and the third argument is an individual directive of the type
    noted in the decorator.  The function has the choice of returning
    a single op directive, which normally can be the directive that
    was actually passed, or a new directive to replace it, or a list
    of zero or more directives to replace it.

    .. seealso::

        :ref:`autogen_rewriter` - usage example

    .. versionadded:: 0.8

    """

    _traverse = util.Dispatcher()

    _chained = None

    def __init__(self):
        self.dispatch = util.Dispatcher()

    def chain(self, other):
        """Produce a "chain" of this :class:`.Rewriter` to another.

        This allows two rewriters to operate serially on a stream,
        e.g.::

            writer1 = autogenerate.Rewriter()
            writer2 = autogenerate.Rewriter()

            @writer1.rewrites(ops.AddColumnOp)
            def add_column_nullable(context, revision, op):
                op.column.nullable = True
                return op

            @writer2.rewrites(ops.AddColumnOp)
            def add_column_idx(context, revision, op):
                idx_op = ops.CreateIndexOp(
                    'ixc', op.table_name, [op.column.name])
                return [
                    op,
                    idx_op
                ]

            writer = writer1.chain(writer2)

        :param other: a :class:`.Rewriter` instance
        :return: a new :class:`.Rewriter` that will run the operations
         of this writer, then the "other" writer, in succession.

        """
        wr = self.__class__.__new__(self.__class__)
        wr.__dict__.update(self.__dict__)
        wr._chained = other
        return wr

    def rewrites(self, operator):
        """Register a function as rewriter for a given type.

        The function should receive three arguments, which are
        the :class:`.MigrationContext`, a ``revision`` tuple, and
        an op directive of the type indicated.  E.g.::

            @writer1.rewrites(ops.AddColumnOp)
            def add_column_nullable(context, revision, op):
                op.column.nullable = True
                return op

        """
        return self.dispatch.dispatch_for(operator)

    def _rewrite(self, context, revision, directive):
        try:
            _rewriter = self.dispatch.dispatch(directive)
        except ValueError:
            _rewriter = None
            yield directive
        else:
            for r_directive in util.to_list(
                    _rewriter(context, revision, directive)):
                yield r_directive

    def __call__(self, context, revision, directives):
        self.process_revision_directives(context, revision, directives)
        if self._chained:
            self._chained(context, revision, directives)

    @_traverse.dispatch_for(ops.MigrationScript)
    def _traverse_script(self, context, revision, directive):
        upgrade_ops_list = []
        for upgrade_ops in directive.upgrade_ops_list:
            # traverse each UpgradeOps in the list, not the scalar
            # directive.upgrade_ops accessor
            ret = self._traverse_for(context, revision, upgrade_ops)
            if len(ret) != 1:
                raise ValueError(
                    "Can only return single object for UpgradeOps traverse")
            upgrade_ops_list.append(ret[0])
        directive.upgrade_ops = upgrade_ops_list

        downgrade_ops_list = []
        for downgrade_ops in directive.downgrade_ops_list:
            ret = self._traverse_for(
                context, revision, downgrade_ops)
            if len(ret) != 1:
                raise ValueError(
                    "Can only return single object for DowngradeOps traverse")
            downgrade_ops_list.append(ret[0])
        directive.downgrade_ops = downgrade_ops_list

    @_traverse.dispatch_for(ops.OpContainer)
    def _traverse_op_container(self, context, revision, directive):
        self._traverse_list(context, revision, directive.ops)

    @_traverse.dispatch_for(ops.MigrateOperation)
    def _traverse_any_directive(self, context, revision, directive):
        pass

    def _traverse_for(self, context, revision, directive):
        directives = list(self._rewrite(context, revision, directive))
        for directive in directives:
            traverser = self._traverse.dispatch(directive)
            traverser(self, context, revision, directive)
        return directives

    def _traverse_list(self, context, revision, directives):
        dest = []
        for directive in directives:
            dest.extend(self._traverse_for(context, revision, directive))

        directives[:] = dest

    def process_revision_directives(self, context, revision, directives):
        self._traverse_list(context, revision, directives)
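
A minimal sketch of wiring a :class:`.Rewriter` into ``env.py`` (the
nullable-forcing rule here is just an example policy):

    from alembic.autogenerate import rewriter
    from alembic.operations import ops

    writer = rewriter.Rewriter()

    @writer.rewrites(ops.AddColumnOp)
    def add_column_nullable(context, revision, op):
        # new columns start out nullable so existing rows remain valid
        op.column.nullable = True
        return op

    # then, inside run_migrations_online() / _offline():
    # context.configure(..., process_revision_directives=writer)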
@@ -1,530 +0,0 @@
import os

from .script import ScriptDirectory
from .runtime.environment import EnvironmentContext
from . import util
from . import autogenerate as autogen


def list_templates(config):
    """List available templates.

    :param config: a :class:`.Config` object.

    """

    config.print_stdout("Available templates:\n")
    for tempname in os.listdir(config.get_template_directory()):
        with open(os.path.join(
                config.get_template_directory(),
                tempname,
                'README')) as readme:
            synopsis = next(readme)
        config.print_stdout("%s - %s", tempname, synopsis)

    config.print_stdout("\nTemplates are used via the 'init' command, e.g.:")
    config.print_stdout("\n  alembic init --template generic ./scripts")


def init(config, directory, template='generic'):
    """Initialize a new scripts directory.

    :param config: a :class:`.Config` object.

    :param directory: string path of the target directory

    :param template: string name of the migration environment template to
     use.

    """

    if os.access(directory, os.F_OK):
        raise util.CommandError("Directory %s already exists" % directory)

    template_dir = os.path.join(config.get_template_directory(),
                                template)
    if not os.access(template_dir, os.F_OK):
        raise util.CommandError("No such template %r" % template)

    util.status("Creating directory %s" % os.path.abspath(directory),
                os.makedirs, directory)

    versions = os.path.join(directory, 'versions')
    util.status("Creating directory %s" % os.path.abspath(versions),
                os.makedirs, versions)

    script = ScriptDirectory(directory)

    for file_ in os.listdir(template_dir):
        file_path = os.path.join(template_dir, file_)
        if file_ == 'alembic.ini.mako':
            config_file = os.path.abspath(config.config_file_name)
            if os.access(config_file, os.F_OK):
                util.msg("File %s already exists, skipping" % config_file)
            else:
                script._generate_template(
                    file_path,
                    config_file,
                    script_location=directory
                )
        elif os.path.isfile(file_path):
            output_file = os.path.join(directory, file_)
            script._copy_file(
                file_path,
                output_file
            )

    util.msg("Please edit configuration/connection/logging "
             "settings in %r before proceeding." % config_file)


def revision(
        config, message=None, autogenerate=False, sql=False,
        head="head", splice=False, branch_label=None,
        version_path=None, rev_id=None, depends_on=None,
        process_revision_directives=None):
    """Create a new revision file.

    :param config: a :class:`.Config` object.

    :param message: string message to apply to the revision; this is the
     ``-m`` option to ``alembic revision``.

    :param autogenerate: whether or not to autogenerate the script from
     the database; this is the ``--autogenerate`` option to
     ``alembic revision``.

    :param sql: whether to dump the script out as a SQL string; when
     specified, the script is dumped to stdout.  This is the ``--sql``
     option to ``alembic revision``.

    :param head: head revision to build the new revision upon as a parent;
     this is the ``--head`` option to ``alembic revision``.

    :param splice: whether or not the new revision should be made into a
     new head of its own; is required when the given ``head`` is not itself
     a head.  This is the ``--splice`` option to ``alembic revision``.

    :param branch_label: string label to apply to the branch; this is the
     ``--branch-label`` option to ``alembic revision``.

    :param version_path: string symbol identifying a specific version path
     from the configuration; this is the ``--version-path`` option to
     ``alembic revision``.

    :param rev_id: optional revision identifier to use instead of having
     one generated; this is the ``--rev-id`` option to ``alembic revision``.

    :param depends_on: optional list of "depends on" identifiers; this is the
     ``--depends-on`` option to ``alembic revision``.

    :param process_revision_directives: this is a callable that takes the
     same form as the callable described at
     :paramref:`.EnvironmentContext.configure.process_revision_directives`;
     will be applied to the structure generated by the revision process
     where it can be altered programmatically.  Note that unlike all
     the other parameters, this option is only available via programmatic
     use of :func:`.command.revision`

     .. versionadded:: 0.9.0

    """

    script_directory = ScriptDirectory.from_config(config)

    command_args = dict(
        message=message,
        autogenerate=autogenerate,
        sql=sql, head=head, splice=splice, branch_label=branch_label,
        version_path=version_path, rev_id=rev_id, depends_on=depends_on
    )
    revision_context = autogen.RevisionContext(
        config, script_directory, command_args,
        process_revision_directives=process_revision_directives)

    environment = util.asbool(
        config.get_main_option("revision_environment")
    )

    if autogenerate:
        environment = True

        if sql:
            raise util.CommandError(
                "Using --sql with --autogenerate does not make any sense")

        def retrieve_migrations(rev, context):
            revision_context.run_autogenerate(rev, context)
            return []
    elif environment:
        def retrieve_migrations(rev, context):
            revision_context.run_no_autogenerate(rev, context)
            return []
    elif sql:
        raise util.CommandError(
            "Using --sql with the revision command when "
            "revision_environment is not configured does not make any sense")

    if environment:
        with EnvironmentContext(
            config,
            script_directory,
            fn=retrieve_migrations,
            as_sql=sql,
            template_args=revision_context.template_args,
            revision_context=revision_context
        ):
            script_directory.run_env()

    scripts = [
        script for script in
        revision_context.generate_scripts()
    ]
    if len(scripts) == 1:
        return scripts[0]
    else:
        return scripts


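A minimal programmatic sketch of the command above (the ``alembic.ini``
path is an assumption about the caller's layout):

    from alembic.config import Config
    from alembic import command

    cfg = Config("alembic.ini")
    script = command.revision(
        cfg, message="add account table", autogenerate=True)
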
def merge(config, revisions, message=None, branch_label=None, rev_id=None):
    """Merge two revisions together.  Creates a new migration file.

    .. versionadded:: 0.7.0

    :param config: a :class:`.Config` instance

    :param message: string message to apply to the revision

    :param branch_label: string label name to apply to the new revision

    :param rev_id: hardcoded revision identifier instead of generating a new
     one.

    .. seealso::

        :ref:`branches`

    """

    script = ScriptDirectory.from_config(config)
    template_args = {
        'config': config  # Let templates use config for
                          # e.g. multiple databases
    }
    return script.generate_revision(
        rev_id or util.rev_id(), message, refresh=True,
        head=revisions, branch_labels=branch_label,
        **template_args)


def upgrade(config, revision, sql=False, tag=None):
    """Upgrade to a later version.

    :param config: a :class:`.Config` instance.

    :param revision: string revision target or range for --sql mode

    :param sql: if True, use ``--sql`` mode

    :param tag: an arbitrary "tag" that can be intercepted by custom
     ``env.py`` scripts via the :meth:`.EnvironmentContext.get_tag_argument`
     method.

    """

    script = ScriptDirectory.from_config(config)

    starting_rev = None
    if ":" in revision:
        if not sql:
            raise util.CommandError("Range revision not allowed")
        starting_rev, revision = revision.split(':', 2)

    def upgrade(rev, context):
        return script._upgrade_revs(revision, rev)

    with EnvironmentContext(
        config,
        script,
        fn=upgrade,
        as_sql=sql,
        starting_rev=starting_rev,
        destination_rev=revision,
        tag=tag
    ):
        script.run_env()


def downgrade(config, revision, sql=False, tag=None):
    """Revert to a previous version.

    :param config: a :class:`.Config` instance.

    :param revision: string revision target or range for --sql mode

    :param sql: if True, use ``--sql`` mode

    :param tag: an arbitrary "tag" that can be intercepted by custom
     ``env.py`` scripts via the :meth:`.EnvironmentContext.get_tag_argument`
     method.

    """

    script = ScriptDirectory.from_config(config)
    starting_rev = None
    if ":" in revision:
        if not sql:
            raise util.CommandError("Range revision not allowed")
        starting_rev, revision = revision.split(':', 2)
    elif sql:
        raise util.CommandError(
            "downgrade with --sql requires <fromrev>:<torev>")

    def downgrade(rev, context):
        return script._downgrade_revs(revision, rev)

    with EnvironmentContext(
        config,
        script,
        fn=downgrade,
        as_sql=sql,
        starting_rev=starting_rev,
        destination_rev=revision,
        tag=tag
    ):
        script.run_env()


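A minimal sketch of driving these commands programmatically (the
``alembic.ini`` path is an assumption):

    from alembic.config import Config
    from alembic import command

    cfg = Config("alembic.ini")
    command.upgrade(cfg, "head")         # migrate up to the latest revision
    command.downgrade(cfg, "-1")         # step back one revision
    command.upgrade(cfg, "base:head", sql=True)   # offline: dump SQL only
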
def show(config, rev):
    """Show the revision(s) denoted by the given symbol.

    :param config: a :class:`.Config` instance.

    :param rev: string revision target

    """

    script = ScriptDirectory.from_config(config)

    if rev == "current":
        def show_current(rev, context):
            for sc in script.get_revisions(rev):
                config.print_stdout(sc.log_entry)
            return []
        with EnvironmentContext(
            config,
            script,
            fn=show_current
        ):
            script.run_env()
    else:
        for sc in script.get_revisions(rev):
            config.print_stdout(sc.log_entry)


def history(config, rev_range=None, verbose=False):
    """List changeset scripts in chronological order.

    :param config: a :class:`.Config` instance.

    :param rev_range: string revision range

    :param verbose: output in verbose mode.

    """

    script = ScriptDirectory.from_config(config)
    if rev_range is not None:
        if ":" not in rev_range:
            raise util.CommandError(
                "History range requires [start]:[end], "
                "[start]:, or :[end]")
        base, head = rev_range.strip().split(":")
    else:
        base = head = None

    def _display_history(config, script, base, head):
        for sc in script.walk_revisions(
                base=base or "base",
                head=head or "heads"):
            config.print_stdout(
                sc.cmd_format(
                    verbose=verbose, include_branches=True,
                    include_doc=True, include_parents=True))

    def _display_history_w_current(config, script, base=None, head=None):
        def _display_current_history(rev, context):
            if head is None:
                _display_history(config, script, base, rev)
            elif base is None:
                _display_history(config, script, rev, head)
            return []

        with EnvironmentContext(
            config,
            script,
            fn=_display_current_history
        ):
            script.run_env()

    if base == "current":
        _display_history_w_current(config, script, head=head)
    elif head == "current":
        _display_history_w_current(config, script, base=base)
    else:
        _display_history(config, script, base, head)


def heads(config, verbose=False, resolve_dependencies=False):
    """Show current available heads in the script directory.

    :param config: a :class:`.Config` instance.

    :param verbose: output in verbose mode.

    :param resolve_dependencies: treat dependency versions as down revisions.

    """

    script = ScriptDirectory.from_config(config)
    if resolve_dependencies:
        heads = script.get_revisions("heads")
    else:
        heads = script.get_revisions(script.get_heads())

    for rev in heads:
        config.print_stdout(
            rev.cmd_format(
                verbose, include_branches=True, tree_indicators=False))


def branches(config, verbose=False):
    """Show current branch points.

    :param config: a :class:`.Config` instance.

    :param verbose: output in verbose mode.

    """
    script = ScriptDirectory.from_config(config)
    for sc in script.walk_revisions():
        if sc.is_branch_point:
            config.print_stdout(
                "%s\n%s\n",
                sc.cmd_format(verbose, include_branches=True),
                "\n".join(
                    "%s -> %s" % (
                        " " * len(str(sc.revision)),
                        rev_obj.cmd_format(
                            False, include_branches=True, include_doc=verbose)
                    ) for rev_obj in
                    (script.get_revision(rev) for rev in sc.nextrev)
                )
            )


def current(config, verbose=False, head_only=False):
    """Display the current revision for a database.

    :param config: a :class:`.Config` instance.

    :param verbose: output in verbose mode.

    :param head_only: deprecated; use ``verbose`` for additional output.

    """

    script = ScriptDirectory.from_config(config)

    if head_only:
        util.warn("--head-only is deprecated")

    def display_version(rev, context):
        if verbose:
            config.print_stdout(
                "Current revision(s) for %s:",
                util.obfuscate_url_pw(context.connection.engine.url)
            )
        for rev in script.get_all_current(rev):
            config.print_stdout(rev.cmd_format(verbose))

        return []

    with EnvironmentContext(
        config,
        script,
        fn=display_version
    ):
        script.run_env()


def stamp(config, revision, sql=False, tag=None):
    """'stamp' the revision table with the given revision; don't
    run any migrations.

    :param config: a :class:`.Config` instance.

    :param revision: target revision.

    :param sql: use ``--sql`` mode

    :param tag: an arbitrary "tag" that can be intercepted by custom
     ``env.py`` scripts via the :meth:`.EnvironmentContext.get_tag_argument`
     method.

    """

    script = ScriptDirectory.from_config(config)

    starting_rev = None
    if ":" in revision:
        if not sql:
            raise util.CommandError("Range revision not allowed")
        starting_rev, revision = revision.split(':', 2)

    def do_stamp(rev, context):
        return script._stamp_revs(revision, rev)

    with EnvironmentContext(
        config,
        script,
        fn=do_stamp,
        as_sql=sql,
        destination_rev=revision,
        starting_rev=starting_rev,
        tag=tag
    ):
        script.run_env()


def edit(config, rev):
    """Edit revision script(s) using $EDITOR.

    :param config: a :class:`.Config` instance.

    :param rev: target revision.

    """

    script = ScriptDirectory.from_config(config)

    if rev == "current":
        def edit_current(rev, context):
            if not rev:
                raise util.CommandError("No current revisions")
            for sc in script.get_revisions(rev):
                util.edit(sc.path)
            return []
        with EnvironmentContext(
            config,
            script,
            fn=edit_current
        ):
            script.run_env()
    else:
        revs = script.get_revisions(rev)
        if not revs:
            raise util.CommandError(
                "No revision files indicated by symbol '%s'" % rev)
        for sc in revs:
            util.edit(sc.path)
@@ -1,482 +0,0 @@
from argparse import ArgumentParser
from .util.compat import SafeConfigParser
import inspect
import os
import sys

from . import command
from . import util
from . import package_dir
from .util import compat


class Config(object):

    """Represent an Alembic configuration.

    Within an ``env.py`` script, this is available
    via the :attr:`.EnvironmentContext.config` attribute,
    which in turn is available at ``alembic.context``::

        from alembic import context

        some_param = context.config.get_main_option("my option")

    When invoking Alembic programmatically, a new
    :class:`.Config` can be created by passing
    the name of an .ini file to the constructor::

        from alembic.config import Config
        alembic_cfg = Config("/path/to/yourapp/alembic.ini")

    With a :class:`.Config` object, you can then
    run Alembic commands programmatically using the directives
    in :mod:`alembic.command`.

    The :class:`.Config` object can also be constructed without
    a filename.  Values can be set programmatically, and
    new sections will be created as needed::

        from alembic.config import Config
        alembic_cfg = Config()
        alembic_cfg.set_main_option("script_location", "myapp:migrations")
        alembic_cfg.set_main_option("url", "postgresql://foo/bar")
        alembic_cfg.set_section_option("mysection", "foo", "bar")

    .. warning::

       When using programmatic configuration, make sure the
       ``env.py`` file in use is compatible with the target configuration;
       including that the call to Python ``logging.fileConfig()`` is
       omitted if the programmatic configuration doesn't actually include
       logging directives.

    For passing non-string values to environments, such as connections and
    engines, use the :attr:`.Config.attributes` dictionary::

        with engine.begin() as connection:
            alembic_cfg.attributes['connection'] = connection
            command.upgrade(alembic_cfg, "head")

    :param file_: name of the .ini file to open.
    :param ini_section: name of the main Alembic section within the
     .ini file
    :param output_buffer: optional file-like object which
     will be passed to the :class:`.MigrationContext` - used to redirect
     the output of "offline generation" when using Alembic programmatically.
    :param stdout: buffer where the "print" output of commands will be sent.
     Defaults to ``sys.stdout``.

     .. versionadded:: 0.4

    :param config_args: A dictionary of keys and values that will be used
     for substitution in the alembic config file.  The dictionary as given
     is **copied** to a new one, stored locally as the attribute
     ``.config_args``.  When the :attr:`.Config.file_config` attribute is
     first invoked, the replacement variable ``here`` will be added to this
     dictionary before the dictionary is passed to ``SafeConfigParser()``
     to parse the .ini file.

     .. versionadded:: 0.7.0

    :param attributes: optional dictionary of arbitrary Python keys/values,
     which will be populated into the :attr:`.Config.attributes` dictionary.

     .. versionadded:: 0.7.5

     .. seealso::

        :ref:`connection_sharing`

    """

    def __init__(self, file_=None, ini_section='alembic', output_buffer=None,
                 stdout=sys.stdout, cmd_opts=None,
                 config_args=util.immutabledict(), attributes=None):
        """Construct a new :class:`.Config`

        """
        self.config_file_name = file_
        self.config_ini_section = ini_section
        self.output_buffer = output_buffer
        self.stdout = stdout
        self.cmd_opts = cmd_opts
        self.config_args = dict(config_args)
        if attributes:
            self.attributes.update(attributes)

    cmd_opts = None
    """The command-line options passed to the ``alembic`` script.

    Within an ``env.py`` script this can be accessed via the
    :attr:`.EnvironmentContext.config` attribute.

    .. versionadded:: 0.6.0

    .. seealso::

        :meth:`.EnvironmentContext.get_x_argument`

    """

    config_file_name = None
    """Filesystem path to the .ini file in use."""

    config_ini_section = None
    """Name of the config file section to read basic configuration
    from.  Defaults to ``alembic``, that is the ``[alembic]`` section
    of the .ini file.  This value is modified using the ``-n/--name``
    option to the Alembic runner.

    """

    @util.memoized_property
    def attributes(self):
        """A Python dictionary for storage of additional state.

        This is a utility dictionary which can include not just strings but
        engines, connections, schema objects, or anything else.
        Use this to pass objects into an env.py script, such as passing
        a :class:`sqlalchemy.engine.base.Connection` when calling
        commands from :mod:`alembic.command` programmatically.

        .. versionadded:: 0.7.5

        .. seealso::

            :ref:`connection_sharing`

            :paramref:`.Config.attributes`

        """
        return {}

    def print_stdout(self, text, *arg):
        """Render a message to standard out."""

        util.write_outstream(
            self.stdout,
            (compat.text_type(text) % arg),
            "\n"
        )

    @util.memoized_property
    def file_config(self):
        """Return the underlying ``ConfigParser`` object.

        Direct access to the .ini file is available here,
        though the :meth:`.Config.get_section` and
        :meth:`.Config.get_main_option`
        methods provide a possibly simpler interface.

        """

        if self.config_file_name:
            here = os.path.abspath(os.path.dirname(self.config_file_name))
        else:
            here = ""
        self.config_args['here'] = here
        file_config = SafeConfigParser(self.config_args)
        if self.config_file_name:
            file_config.read([self.config_file_name])
        else:
            file_config.add_section(self.config_ini_section)
        return file_config

    def get_template_directory(self):
        """Return the directory where Alembic setup templates are found.

        This method is used by the alembic ``init`` and ``list_templates``
        commands.

        """
        return os.path.join(package_dir, 'templates')

    def get_section(self, name):
        """Return all the configuration options from a given .ini file section
        as a dictionary.

        """
        return dict(self.file_config.items(name))

    def set_main_option(self, name, value):
        """Set an option programmatically within the 'main' section.

        This overrides whatever was in the .ini file.

        :param name: name of the value

        :param value: the value.  Note that this value is passed to
         ``ConfigParser.set``, which supports variable interpolation using
         pyformat (e.g. ``%(some_value)s``).  A raw percent sign not part of
         an interpolation symbol must therefore be escaped, e.g. ``%%``.
         The given value may refer to another value already in the file
         using the interpolation format.

        """
        self.set_section_option(self.config_ini_section, name, value)

    def remove_main_option(self, name):
        self.file_config.remove_option(self.config_ini_section, name)

    def set_section_option(self, section, name, value):
        """Set an option programmatically within the given section.

        The section is created if it doesn't exist already.
        The value here will override whatever was in the .ini
        file.

        :param section: name of the section

        :param name: name of the value

        :param value: the value.  Note that this value is passed to
         ``ConfigParser.set``, which supports variable interpolation using
         pyformat (e.g. ``%(some_value)s``).  A raw percent sign not part of
         an interpolation symbol must therefore be escaped, e.g. ``%%``.
         The given value may refer to another value already in the file
         using the interpolation format.

        """

        if not self.file_config.has_section(section):
            self.file_config.add_section(section)
        self.file_config.set(section, name, value)

    def get_section_option(self, section, name, default=None):
        """Return an option from the given section of the .ini file.

        """
        if not self.file_config.has_section(section):
            raise util.CommandError("No config file %r found, or file has no "
                                    "'[%s]' section" %
                                    (self.config_file_name, section))
        if self.file_config.has_option(section, name):
            return self.file_config.get(section, name)
        else:
            return default

    def get_main_option(self, name, default=None):
        """Return an option from the 'main' section of the .ini file.

        This defaults to being a key from the ``[alembic]``
        section, unless the ``-n/--name`` flag were used to
        indicate a different section.

        """
        return self.get_section_option(self.config_ini_section, name, default)



class CommandLine(object):

    def __init__(self, prog=None):
        self._generate_args(prog)

    def _generate_args(self, prog):
        def add_options(parser, positional, kwargs):
            kwargs_opts = {
                'template': (
                    "-t", "--template",
                    dict(
                        default='generic',
                        type=str,
                        help="Setup template for use with 'init'"
                    )
                ),
                'message': (
                    "-m", "--message",
                    dict(
                        type=str,
                        help="Message string to use with 'revision'")
                ),
                'sql': (
                    "--sql",
                    dict(
                        action="store_true",
                        help="Don't emit SQL to database - dump to "
                        "standard output/file instead"
                    )
                ),
                'tag': (
                    "--tag",
                    dict(
                        type=str,
                        help="Arbitrary 'tag' name - can be used by "
                        "custom env.py scripts.")
                ),
                'head': (
                    "--head",
                    dict(
                        type=str,
                        help="Specify head revision or <branchname>@head "
                        "to base new revision on."
                    )
                ),
                'splice': (
                    "--splice",
                    dict(
                        action="store_true",
                        help="Allow a non-head revision as the "
                        "'head' to splice onto"
                    )
                ),
                'depends_on': (
                    "--depends-on",
                    dict(
                        action="append",
                        help="Specify one or more revision identifiers "
                        "which this revision should depend on."
                    )
                ),
                'rev_id': (
                    "--rev-id",
                    dict(
                        type=str,
                        help="Specify a hardcoded revision id instead of "
                        "generating one"
                    )
                ),
                'version_path': (
                    "--version-path",
                    dict(
                        type=str,
                        help="Specify specific path from config for "
                        "version file"
                    )
                ),
                'branch_label': (
                    "--branch-label",
                    dict(
                        type=str,
                        help="Specify a branch label to apply to the "
                        "new revision"
                    )
                ),
                'verbose': (
                    "-v", "--verbose",
                    dict(
                        action="store_true",
                        help="Use more verbose output"
                    )
                ),
                'resolve_dependencies': (
                    '--resolve-dependencies',
                    dict(
                        action="store_true",
                        help="Treat dependency versions as down revisions"
                    )
                ),
                'autogenerate': (
                    "--autogenerate",
                    dict(
                        action="store_true",
                        help="Populate revision script with candidate "
                        "migration operations, based on comparison "
                        "of database to model.")
                ),
                'head_only': (
                    "--head-only",
                    dict(
                        action="store_true",
                        help="Deprecated.  Use --verbose for "
                        "additional output")
                ),
                'rev_range': (
                    "-r", "--rev-range",
                    dict(
                        action="store",
                        help="Specify a revision range; "
                        "format is [start]:[end]")
                )
            }
            positional_help = {
                'directory': "location of scripts directory",
                'revision': "revision identifier",
                'revisions': "one or more revisions, or 'heads' for all heads"
            }
            for arg in kwargs:
                if arg in kwargs_opts:
                    args = kwargs_opts[arg]
                    args, kw = args[0:-1], args[-1]
                    parser.add_argument(*args, **kw)

            for arg in positional:
                if arg == "revisions":
                    parser.add_argument(
                        arg, nargs='+', help=positional_help.get(arg))
                else:
                    parser.add_argument(arg, help=positional_help.get(arg))

        parser = ArgumentParser(prog=prog)
        parser.add_argument("-c", "--config",
                            type=str,
                            default="alembic.ini",
                            help="Alternate config file")
        parser.add_argument("-n", "--name",
                            type=str,
                            default="alembic",
                            help="Name of section in .ini file to "
                            "use for Alembic config")
        parser.add_argument("-x", action="append",
                            help="Additional arguments consumed by "
                            "custom env.py scripts, e.g. -x "
                            "setting1=somesetting -x setting2=somesetting")
        parser.add_argument("--raiseerr", action="store_true",
                            help="Raise a full stack trace on error")
        subparsers = parser.add_subparsers()

        for fn in [getattr(command, n) for n in dir(command)]:
            if inspect.isfunction(fn) and \
                    fn.__name__[0] != '_' and \
                    fn.__module__ == 'alembic.command':

                spec = inspect.getargspec(fn)
                if spec[3]:
                    positional = spec[0][1:-len(spec[3])]
                    kwarg = spec[0][-len(spec[3]):]
                else:
                    positional = spec[0][1:]
                    kwarg = []

                subparser = subparsers.add_parser(
                    fn.__name__,
                    help=fn.__doc__)
                add_options(subparser, positional, kwarg)
                subparser.set_defaults(cmd=(fn, positional, kwarg))
        self.parser = parser

    def run_cmd(self, config, options):
        fn, positional, kwarg = options.cmd

        try:
            fn(config,
               *[getattr(options, k, None) for k in positional],
               **dict((k, getattr(options, k, None)) for k in kwarg)
               )
        except util.CommandError as e:
            if options.raiseerr:
                raise
            else:
                util.err(str(e))

    def main(self, argv=None):
        options = self.parser.parse_args(argv)
        if not hasattr(options, "cmd"):
            # see http://bugs.python.org/issue9253, argparse
            # behavior changed incompatibly in py3.3
            self.parser.error("too few arguments")
        else:
            cfg = Config(file_=options.config,
                         ini_section=options.name, cmd_opts=options)
            self.run_cmd(cfg, options)


def main(argv=None, prog=None, **kwargs):
    """The console runner function for Alembic."""

    CommandLine(prog=prog).main(argv=argv)

if __name__ == '__main__':
    main()
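
# Illustrative sketch (not part of the original file): driving the CLI
# defined above from Python; equivalent to running
# ``alembic -c alembic.ini history --verbose`` at a shell.  The config
# file name is an assumption.
from alembic.config import CommandLine

CommandLine(prog="alembic").main(
    argv=["-c", "alembic.ini", "history", "--verbose"])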
@ -1,5 +0,0 @@
from .runtime.environment import EnvironmentContext

# create proxy functions for
# each method on the EnvironmentContext class.
EnvironmentContext.create_module_class_proxy(globals(), locals())
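
# Illustrative sketch (not part of the original file): thanks to the proxy
# set up above, an env.py script can call EnvironmentContext methods
# directly on the module.  This fragment is only meaningful when Alembic
# itself invokes env.py.
from alembic import context

url = context.config.get_main_option("sqlalchemy.url")
if context.is_offline_mode():
    context.configure(url=url, literal_binds=True)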
@ -1,2 +0,0 @@
from . import postgresql, mysql, sqlite, mssql, oracle  # pragma: no cover
from .impl import DefaultImpl  # pragma: no cover
@ -1,204 +0,0 @@
import functools

from sqlalchemy.ext.compiler import compiles
from sqlalchemy.schema import DDLElement, Column
from sqlalchemy import Integer
from sqlalchemy import types as sqltypes
from .. import util

# backwards compat
from ..util.sqla_compat import (  # noqa
    _table_for_constraint,
    _columns_for_constraint, _fk_spec, _is_type_bound, _find_columns)

if util.sqla_09:
    from sqlalchemy.sql.elements import quoted_name


class AlterTable(DDLElement):

    """Represent an ALTER TABLE statement.

    Only the string name and optional schema name of the table
    are required, not a full Table object.

    """

    def __init__(self, table_name, schema=None):
        self.table_name = table_name
        self.schema = schema


class RenameTable(AlterTable):

    def __init__(self, old_table_name, new_table_name, schema=None):
        super(RenameTable, self).__init__(old_table_name, schema=schema)
        self.new_table_name = new_table_name


class AlterColumn(AlterTable):

    def __init__(self, name, column_name, schema=None,
                 existing_type=None,
                 existing_nullable=None,
                 existing_server_default=None):
        super(AlterColumn, self).__init__(name, schema=schema)
        self.column_name = column_name
        self.existing_type = sqltypes.to_instance(existing_type) \
            if existing_type is not None else None
        self.existing_nullable = existing_nullable
        self.existing_server_default = existing_server_default


class ColumnNullable(AlterColumn):

    def __init__(self, name, column_name, nullable, **kw):
        super(ColumnNullable, self).__init__(name, column_name,
                                             **kw)
        self.nullable = nullable


class ColumnType(AlterColumn):

    def __init__(self, name, column_name, type_, **kw):
        super(ColumnType, self).__init__(name, column_name,
                                         **kw)
        self.type_ = sqltypes.to_instance(type_)


class ColumnName(AlterColumn):

    def __init__(self, name, column_name, newname, **kw):
        super(ColumnName, self).__init__(name, column_name, **kw)
        self.newname = newname


class ColumnDefault(AlterColumn):

    def __init__(self, name, column_name, default, **kw):
        super(ColumnDefault, self).__init__(name, column_name, **kw)
        self.default = default


class AddColumn(AlterTable):

    def __init__(self, name, column, schema=None):
        super(AddColumn, self).__init__(name, schema=schema)
        self.column = column


class DropColumn(AlterTable):

    def __init__(self, name, column, schema=None):
        super(DropColumn, self).__init__(name, schema=schema)
        self.column = column


@compiles(RenameTable)
def visit_rename_table(element, compiler, **kw):
    return "%s RENAME TO %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_table_name(compiler, element.new_table_name, element.schema)
    )


@compiles(AddColumn)
def visit_add_column(element, compiler, **kw):
    return "%s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        add_column(compiler, element.column, **kw)
    )


@compiles(DropColumn)
def visit_drop_column(element, compiler, **kw):
    return "%s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        drop_column(compiler, element.column.name, **kw)
    )


@compiles(ColumnNullable)
def visit_column_nullable(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "DROP NOT NULL" if element.nullable else "SET NOT NULL"
    )


@compiles(ColumnType)
def visit_column_type(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "TYPE %s" % format_type(compiler, element.type_)
    )


@compiles(ColumnName)
def visit_column_name(element, compiler, **kw):
    return "%s RENAME %s TO %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_column_name(compiler, element.column_name),
        format_column_name(compiler, element.newname)
    )


@compiles(ColumnDefault)
def visit_column_default(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "SET DEFAULT %s" %
        format_server_default(compiler, element.default)
        if element.default is not None
        else "DROP DEFAULT"
    )


def quote_dotted(name, quote):
    """Quote the elements of a dotted name."""

    if util.sqla_09 and isinstance(name, quoted_name):
        return quote(name)
    result = '.'.join([quote(x) for x in name.split('.')])
    return result


def format_table_name(compiler, name, schema):
    quote = functools.partial(compiler.preparer.quote, force=None)
    if schema:
        return quote_dotted(schema, quote) + "." + quote(name)
    else:
        return quote(name)


def format_column_name(compiler, name):
    return compiler.preparer.quote(name, None)


def format_server_default(compiler, default):
    return compiler.get_column_default_string(
        Column("x", Integer, server_default=default)
    )


def format_type(compiler, type_):
    return compiler.dialect.type_compiler.process(type_)


def alter_table(compiler, name, schema):
    return "ALTER TABLE %s" % format_table_name(compiler, name, schema)


def drop_column(compiler, name):
    return 'DROP COLUMN %s' % format_column_name(compiler, name)


def alter_column(compiler, name):
    return 'ALTER COLUMN %s' % format_column_name(compiler, name)


def add_column(compiler, column, **kw):
    return "ADD COLUMN %s" % compiler.get_column_specification(column, **kw)
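
# Illustrative sketch (not part of the original file): the @compiles
# handlers above turn these DDLElement subclasses into strings.  Any
# SQLAlchemy dialect works with the generic handlers; sqlite is an
# arbitrary choice here.
from sqlalchemy.dialects import sqlite

from alembic.ddl.base import RenameTable

stmt = RenameTable("old_table", "new_table")
print(stmt.compile(dialect=sqlite.dialect()))
# ALTER TABLE old_table RENAME TO new_table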
@ -1,370 +0,0 @@
from sqlalchemy import schema, text
from sqlalchemy import types as sqltypes

from ..util.compat import (
    string_types, text_type, with_metaclass
)
from ..util import sqla_compat
from .. import util
from . import base


class ImplMeta(type):

    def __init__(cls, classname, bases, dict_):
        newtype = type.__init__(cls, classname, bases, dict_)
        if '__dialect__' in dict_:
            _impls[dict_['__dialect__']] = cls
        return newtype

_impls = {}


class DefaultImpl(with_metaclass(ImplMeta)):

    """Provide the entrypoint for major migration operations,
    including database-specific behavioral variances.

    While individual SQL/DDL constructs already provide
    for database-specific implementations, variances here
    allow for entirely different sequences of operations
    to take place for a particular migration, such as
    SQL Server's special 'IDENTITY INSERT' step for
    bulk inserts.

    """
    __dialect__ = 'default'

    transactional_ddl = False
    command_terminator = ";"

    def __init__(self, dialect, connection, as_sql,
                 transactional_ddl, output_buffer,
                 context_opts):
        self.dialect = dialect
        self.connection = connection
        self.as_sql = as_sql
        self.literal_binds = context_opts.get('literal_binds', False)
        if self.literal_binds and not util.sqla_08:
            util.warn("'literal_binds' flag not supported in SQLAlchemy 0.7")
            self.literal_binds = False

        self.output_buffer = output_buffer
        self.memo = {}
        self.context_opts = context_opts
        if transactional_ddl is not None:
            self.transactional_ddl = transactional_ddl

        if self.literal_binds:
            if not self.as_sql:
                raise util.CommandError(
                    "Can't use literal_binds setting without as_sql mode")

    @classmethod
    def get_by_dialect(cls, dialect):
        return _impls[dialect.name]

    def static_output(self, text):
        self.output_buffer.write(text_type(text + "\n\n"))
        self.output_buffer.flush()

    def requires_recreate_in_batch(self, batch_op):
        """Return True if the given :class:`.BatchOperationsImpl`
        would need the table to be recreated and copied in order to
        proceed.

        Normally, this returns True only on SQLite, when operations
        other than add_column are present.

        """
        return False

    def prep_table_for_batch(self, table):
        """Perform any operations needed on a table before a new
        one is created to replace it in batch mode.

        The PG dialect uses this to drop constraints on the table
        before the new one uses those same names.

        """

    @property
    def bind(self):
        return self.connection

    def _exec(self, construct, execution_options=None,
              multiparams=(),
              params=util.immutabledict()):
        if isinstance(construct, string_types):
            construct = text(construct)
        if self.as_sql:
            if multiparams or params:
                # TODO: coverage
                raise Exception("Execution arguments not allowed with as_sql")

            if self.literal_binds and not isinstance(
                    construct, schema.DDLElement):
                compile_kw = dict(compile_kwargs={"literal_binds": True})
            else:
                compile_kw = {}

            self.static_output(text_type(
                construct.compile(dialect=self.dialect, **compile_kw)
            ).replace("\t", "    ").strip() + self.command_terminator)
        else:
            conn = self.connection
            if execution_options:
                conn = conn.execution_options(**execution_options)
            return conn.execute(construct, *multiparams, **params)

    def execute(self, sql, execution_options=None):
        self._exec(sql, execution_options)

    def alter_column(self, table_name, column_name,
                     nullable=None,
                     server_default=False,
                     name=None,
                     type_=None,
                     schema=None,
                     autoincrement=None,
                     existing_type=None,
                     existing_server_default=None,
                     existing_nullable=None,
                     existing_autoincrement=None
                     ):
        if autoincrement is not None or existing_autoincrement is not None:
            util.warn(
                "autoincrement and existing_autoincrement "
                "only make sense for MySQL")
        if nullable is not None:
            self._exec(base.ColumnNullable(
                table_name, column_name,
                nullable, schema=schema,
                existing_type=existing_type,
                existing_server_default=existing_server_default,
                existing_nullable=existing_nullable,
            ))
        if server_default is not False:
            self._exec(base.ColumnDefault(
                table_name, column_name, server_default,
                schema=schema,
                existing_type=existing_type,
                existing_server_default=existing_server_default,
                existing_nullable=existing_nullable,
            ))
        if type_ is not None:
            self._exec(base.ColumnType(
                table_name, column_name, type_, schema=schema,
                existing_type=existing_type,
                existing_server_default=existing_server_default,
                existing_nullable=existing_nullable,
            ))
        # do the new name last ;)
        if name is not None:
            self._exec(base.ColumnName(
                table_name, column_name, name, schema=schema,
                existing_type=existing_type,
                existing_server_default=existing_server_default,
                existing_nullable=existing_nullable,
            ))

    def add_column(self, table_name, column, schema=None):
        self._exec(base.AddColumn(table_name, column, schema=schema))

    def drop_column(self, table_name, column, schema=None, **kw):
        self._exec(base.DropColumn(table_name, column, schema=schema))

    def add_constraint(self, const):
        if const._create_rule is None or \
                const._create_rule(self):
            self._exec(schema.AddConstraint(const))

    def drop_constraint(self, const):
        self._exec(schema.DropConstraint(const))

    def rename_table(self, old_table_name, new_table_name, schema=None):
        self._exec(base.RenameTable(old_table_name,
                                    new_table_name, schema=schema))

    def create_table(self, table):
        if util.sqla_07:
            table.dispatch.before_create(table, self.connection,
                                         checkfirst=False,
                                         _ddl_runner=self)
        self._exec(schema.CreateTable(table))
        if util.sqla_07:
            table.dispatch.after_create(table, self.connection,
                                        checkfirst=False,
                                        _ddl_runner=self)
        for index in table.indexes:
            self._exec(schema.CreateIndex(index))

    def drop_table(self, table):
        self._exec(schema.DropTable(table))

    def create_index(self, index):
        self._exec(schema.CreateIndex(index))

    def drop_index(self, index):
        self._exec(schema.DropIndex(index))

    def bulk_insert(self, table, rows, multiinsert=True):
        if not isinstance(rows, list):
            raise TypeError("List expected")
        elif rows and not isinstance(rows[0], dict):
            raise TypeError("List of dictionaries expected")
        if self.as_sql:
            for row in rows:
                self._exec(table.insert(inline=True).values(**dict(
                    (k,
                     sqla_compat._literal_bindparam(
                         k, v, type_=table.c[k].type)
                     if not isinstance(
                         v, sqla_compat._literal_bindparam) else v)
                    for k, v in row.items()
                )))
        else:
            # work around http://www.sqlalchemy.org/trac/ticket/2461
            if not hasattr(table, '_autoincrement_column'):
                table._autoincrement_column = None
            if rows:
                if multiinsert:
                    self._exec(table.insert(inline=True), multiparams=rows)
                else:
                    for row in rows:
                        self._exec(table.insert(inline=True).values(**row))

    def compare_type(self, inspector_column, metadata_column):

        conn_type = inspector_column.type
        metadata_type = metadata_column.type

        metadata_impl = metadata_type.dialect_impl(self.dialect)
        if isinstance(metadata_impl, sqltypes.Variant):
            metadata_impl = metadata_impl.impl.dialect_impl(self.dialect)

        # work around SQLAlchemy bug "stale value for type affinity"
        # fixed in 0.7.4
        metadata_impl.__dict__.pop('_type_affinity', None)

        if hasattr(metadata_impl, "compare_against_backend"):
            comparison = metadata_impl.compare_against_backend(
                self.dialect, conn_type)
            if comparison is not None:
                return not comparison

        if conn_type._compare_type_affinity(
            metadata_impl
        ):
            comparator = _type_comparators.get(conn_type._type_affinity, None)

            return comparator and comparator(metadata_impl, conn_type)
        else:
            return True

    def compare_server_default(self, inspector_column,
                               metadata_column,
                               rendered_metadata_default,
                               rendered_inspector_default):
        return rendered_inspector_default != rendered_metadata_default

    def correct_for_autogen_constraints(self, conn_uniques, conn_indexes,
                                        metadata_unique_constraints,
                                        metadata_indexes):
        pass

    def _compat_autogen_column_reflect(self, inspector):
        if util.sqla_08:
            return self.autogen_column_reflect
        else:
            def adapt(table, column_info):
                return self.autogen_column_reflect(
                    inspector, table, column_info)
            return adapt

    def correct_for_autogen_foreignkeys(self, conn_fks, metadata_fks):
        pass

    def autogen_column_reflect(self, inspector, table, column_info):
        """A hook that is attached to the 'column_reflect' event for when
        a Table is reflected from the database during the autogenerate
        process.

        Dialects can elect to modify the information gathered here.

        """

    def start_migrations(self):
        """A hook called when :meth:`.EnvironmentContext.run_migrations`
        is called.

        Implementations can set up per-migration-run state here.

        """

    def emit_begin(self):
        """Emit the string ``BEGIN``, or the backend-specific
        equivalent, on the current connection context.

        This is used in offline mode and typically
        via :meth:`.EnvironmentContext.begin_transaction`.

        """
        self.static_output("BEGIN" + self.command_terminator)

    def emit_commit(self):
        """Emit the string ``COMMIT``, or the backend-specific
        equivalent, on the current connection context.

        This is used in offline mode and typically
        via :meth:`.EnvironmentContext.begin_transaction`.

        """
        self.static_output("COMMIT" + self.command_terminator)

    def render_type(self, type_obj, autogen_context):
        return False


def _string_compare(t1, t2):
    return \
        t1.length is not None and \
        t1.length != t2.length


def _numeric_compare(t1, t2):
    return \
        (
            t1.precision is not None and
            t1.precision != t2.precision
        ) or \
        (
            t1.scale is not None and
            t1.scale != t2.scale
        )


def _integer_compare(t1, t2):
    t1_small_or_big = (
        'S' if isinstance(t1, sqltypes.SmallInteger)
        else 'B' if isinstance(t1, sqltypes.BigInteger) else 'I'
    )
    t2_small_or_big = (
        'S' if isinstance(t2, sqltypes.SmallInteger)
        else 'B' if isinstance(t2, sqltypes.BigInteger) else 'I'
    )
    return t1_small_or_big != t2_small_or_big


def _datetime_compare(t1, t2):
    return (
        t1.timezone != t2.timezone
    )


_type_comparators = {
    sqltypes.String: _string_compare,
    sqltypes.Numeric: _numeric_compare,
    sqltypes.Integer: _integer_compare,
    sqltypes.DateTime: _datetime_compare,
}
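
# Illustrative sketch (not part of the original file): each comparator in
# _type_comparators answers "do these types differ for autogenerate?",
# returning a truthy value when an ALTER should be suggested.
from sqlalchemy import types as sqltypes

from alembic.ddl.impl import (
    _datetime_compare, _integer_compare, _string_compare)

print(_string_compare(sqltypes.String(50), sqltypes.String(100)))     # True
print(_integer_compare(sqltypes.SmallInteger(), sqltypes.Integer()))  # True
print(_datetime_compare(sqltypes.DateTime(), sqltypes.DateTime()))    # False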
@ -1,233 +0,0 @@
from sqlalchemy.ext.compiler import compiles

from .. import util
from .impl import DefaultImpl
from .base import alter_table, AddColumn, ColumnName, RenameTable,\
    format_table_name, format_column_name, ColumnNullable, alter_column,\
    format_server_default, ColumnDefault, format_type, ColumnType
from sqlalchemy.sql.expression import ClauseElement, Executable


class MSSQLImpl(DefaultImpl):
    __dialect__ = 'mssql'
    transactional_ddl = True
    batch_separator = "GO"

    def __init__(self, *arg, **kw):
        super(MSSQLImpl, self).__init__(*arg, **kw)
        self.batch_separator = self.context_opts.get(
            "mssql_batch_separator",
            self.batch_separator)

    def _exec(self, construct, *args, **kw):
        result = super(MSSQLImpl, self)._exec(construct, *args, **kw)
        if self.as_sql and self.batch_separator:
            self.static_output(self.batch_separator)
        return result

    def emit_begin(self):
        self.static_output("BEGIN TRANSACTION" + self.command_terminator)

    def emit_commit(self):
        super(MSSQLImpl, self).emit_commit()
        if self.as_sql and self.batch_separator:
            self.static_output(self.batch_separator)

    def alter_column(self, table_name, column_name,
                     nullable=None,
                     server_default=False,
                     name=None,
                     type_=None,
                     schema=None,
                     existing_type=None,
                     existing_server_default=None,
                     existing_nullable=None,
                     **kw
                     ):

        if nullable is not None and existing_type is None:
            if type_ is not None:
                existing_type = type_
                # the NULL/NOT NULL alter will handle
                # the type alteration
                type_ = None
            else:
                raise util.CommandError(
                    "MS-SQL ALTER COLUMN operations "
                    "with NULL or NOT NULL require that the "
                    "existing_type or a new type_ be passed.")

        super(MSSQLImpl, self).alter_column(
            table_name, column_name,
            nullable=nullable,
            type_=type_,
            schema=schema,
            existing_type=existing_type,
            existing_nullable=existing_nullable,
            **kw
        )

        if server_default is not False:
            if existing_server_default is not False or \
                    server_default is None:
                self._exec(
                    _ExecDropConstraint(
                        table_name, column_name,
                        'sys.default_constraints')
                )
            if server_default is not None:
                super(MSSQLImpl, self).alter_column(
                    table_name, column_name,
                    schema=schema,
                    server_default=server_default)

        if name is not None:
            super(MSSQLImpl, self).alter_column(
                table_name, column_name,
                schema=schema,
                name=name)

    def bulk_insert(self, table, rows, **kw):
        if self.as_sql:
            self._exec(
                "SET IDENTITY_INSERT %s ON" %
                self.dialect.identifier_preparer.format_table(table)
            )
            super(MSSQLImpl, self).bulk_insert(table, rows, **kw)
            self._exec(
                "SET IDENTITY_INSERT %s OFF" %
                self.dialect.identifier_preparer.format_table(table)
            )
        else:
            super(MSSQLImpl, self).bulk_insert(table, rows, **kw)

    def drop_column(self, table_name, column, **kw):
        drop_default = kw.pop('mssql_drop_default', False)
        if drop_default:
            self._exec(
                _ExecDropConstraint(
                    table_name, column,
                    'sys.default_constraints')
            )
        drop_check = kw.pop('mssql_drop_check', False)
        if drop_check:
            self._exec(
                _ExecDropConstraint(
                    table_name, column,
                    'sys.check_constraints')
            )
        drop_fks = kw.pop('mssql_drop_foreign_key', False)
        if drop_fks:
            self._exec(
                _ExecDropFKConstraint(table_name, column)
            )
        super(MSSQLImpl, self).drop_column(table_name, column, **kw)


class _ExecDropConstraint(Executable, ClauseElement):

    def __init__(self, tname, colname, type_):
        self.tname = tname
        self.colname = colname
        self.type_ = type_


class _ExecDropFKConstraint(Executable, ClauseElement):

    def __init__(self, tname, colname):
        self.tname = tname
        self.colname = colname


@compiles(_ExecDropConstraint, 'mssql')
def _exec_drop_col_constraint(element, compiler, **kw):
    tname, colname, type_ = element.tname, element.colname, element.type_
    # from http://www.mssqltips.com/sqlservertip/1425/\
    # working-with-default-constraints-in-sql-server/
    # TODO: needs table formatting, etc.
    return """declare @const_name varchar(256)
select @const_name = [name] from %(type)s
where parent_object_id = object_id('%(tname)s')
and col_name(parent_object_id, parent_column_id) = '%(colname)s'
exec('alter table %(tname_quoted)s drop constraint ' + @const_name)""" % {
        'type': type_,
        'tname': tname,
        'colname': colname,
        'tname_quoted': format_table_name(compiler, tname, None),
    }


@compiles(_ExecDropFKConstraint, 'mssql')
def _exec_drop_col_fk_constraint(element, compiler, **kw):
    tname, colname = element.tname, element.colname

    return """declare @const_name varchar(256)
select @const_name = [name] from
sys.foreign_keys fk join sys.foreign_key_columns fkc
on fk.object_id=fkc.constraint_object_id
where fkc.parent_object_id = object_id('%(tname)s')
and col_name(fkc.parent_object_id, fkc.parent_column_id) = '%(colname)s'
exec('alter table %(tname_quoted)s drop constraint ' + @const_name)""" % {
        'tname': tname,
        'colname': colname,
        'tname_quoted': format_table_name(compiler, tname, None),
    }


@compiles(AddColumn, 'mssql')
def visit_add_column(element, compiler, **kw):
    return "%s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        mssql_add_column(compiler, element.column, **kw)
    )


def mssql_add_column(compiler, column, **kw):
    return "ADD %s" % compiler.get_column_specification(column, **kw)


@compiles(ColumnNullable, 'mssql')
def visit_column_nullable(element, compiler, **kw):
    return "%s %s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        format_type(compiler, element.existing_type),
        "NULL" if element.nullable else "NOT NULL"
    )


@compiles(ColumnDefault, 'mssql')
def visit_column_default(element, compiler, **kw):
    # TODO: there can also be a named constraint
    # with ADD CONSTRAINT here
    return "%s ADD DEFAULT %s FOR %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_server_default(compiler, element.default),
        format_column_name(compiler, element.column_name)
    )


@compiles(ColumnName, 'mssql')
def visit_rename_column(element, compiler, **kw):
    return "EXEC sp_rename '%s.%s', %s, 'COLUMN'" % (
        format_table_name(compiler, element.table_name, element.schema),
        format_column_name(compiler, element.column_name),
        format_column_name(compiler, element.newname)
    )


@compiles(ColumnType, 'mssql')
def visit_column_type(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        format_type(compiler, element.type_)
    )


@compiles(RenameTable, 'mssql')
def visit_rename_table(element, compiler, **kw):
    return "EXEC sp_rename '%s', %s" % (
        format_table_name(compiler, element.table_name, element.schema),
        format_table_name(compiler, element.new_table_name, None)
    )
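
# Illustrative sketch (not part of the original file): the mssql_drop_*
# flags popped in drop_column() above, as they would appear in a migration
# script.  Table and column names are assumptions.
from alembic import op

op.drop_column(
    'account', 'old_status',
    mssql_drop_default=True,  # drop the bound DEFAULT constraint first
    mssql_drop_check=True,    # likewise any CHECK constraint on the column
)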
@ -1,332 +0,0 @@
from sqlalchemy.ext.compiler import compiles
from sqlalchemy import types as sqltypes
from sqlalchemy import schema

from ..util.compat import string_types
from .. import util
from .impl import DefaultImpl
from .base import ColumnNullable, ColumnName, ColumnDefault, \
    ColumnType, AlterColumn, format_column_name, \
    format_server_default
from .base import alter_table
from ..autogenerate import compare
from ..util.sqla_compat import _is_type_bound, sqla_100


class MySQLImpl(DefaultImpl):
    __dialect__ = 'mysql'

    transactional_ddl = False

    def alter_column(self, table_name, column_name,
                     nullable=None,
                     server_default=False,
                     name=None,
                     type_=None,
                     schema=None,
                     existing_type=None,
                     existing_server_default=None,
                     existing_nullable=None,
                     autoincrement=None,
                     existing_autoincrement=None,
                     **kw
                     ):
        if name is not None:
            self._exec(
                MySQLChangeColumn(
                    table_name, column_name,
                    schema=schema,
                    newname=name,
                    nullable=nullable if nullable is not None else
                    existing_nullable
                    if existing_nullable is not None
                    else True,
                    type_=type_ if type_ is not None else existing_type,
                    default=server_default if server_default is not False
                    else existing_server_default,
                    autoincrement=autoincrement if autoincrement is not None
                    else existing_autoincrement
                )
            )
        elif nullable is not None or \
                type_ is not None or \
                autoincrement is not None:
            self._exec(
                MySQLModifyColumn(
                    table_name, column_name,
                    schema=schema,
                    newname=name if name is not None else column_name,
                    nullable=nullable if nullable is not None else
                    existing_nullable
                    if existing_nullable is not None
                    else True,
                    type_=type_ if type_ is not None else existing_type,
                    default=server_default if server_default is not False
                    else existing_server_default,
                    autoincrement=autoincrement if autoincrement is not None
                    else existing_autoincrement
                )
            )
        elif server_default is not False:
            self._exec(
                MySQLAlterDefault(
                    table_name, column_name, server_default,
                    schema=schema,
                )
            )

    def drop_constraint(self, const):
        if isinstance(const, schema.CheckConstraint) and _is_type_bound(const):
            return

        super(MySQLImpl, self).drop_constraint(const)

    def compare_server_default(self, inspector_column,
                               metadata_column,
                               rendered_metadata_default,
                               rendered_inspector_default):
        # partially a workaround for SQLAlchemy issue #3023; if the
        # column were created without "NOT NULL", MySQL may have added
        # an implicit default of '0' which we need to skip
        if metadata_column.type._type_affinity is sqltypes.Integer and \
                inspector_column.primary_key and \
                not inspector_column.autoincrement and \
                not rendered_metadata_default and \
                rendered_inspector_default == "'0'":
            return False
        else:
            return rendered_inspector_default != rendered_metadata_default

    def correct_for_autogen_constraints(self, conn_unique_constraints,
                                        conn_indexes,
                                        metadata_unique_constraints,
                                        metadata_indexes):

        # TODO: if SQLA 1.0, make use of "duplicates_index"
        # metadata
        removed = set()
        for idx in list(conn_indexes):
            if idx.unique:
                continue
            # MySQL puts implicit indexes on FK columns, even if
            # composite and even if MyISAM, so can't check this too easily.
            # the name of the index may be the column name or it may
            # be the name of the FK constraint.
            for col in idx.columns:
                if idx.name == col.name:
                    conn_indexes.remove(idx)
                    removed.add(idx.name)
                    break
                for fk in col.foreign_keys:
                    if fk.name == idx.name:
                        conn_indexes.remove(idx)
                        removed.add(idx.name)
                        break
                if idx.name in removed:
                    break

        # then remove indexes from the "metadata_indexes"
        # that we've removed from reflected, otherwise they come out
        # as adds (see #202)
        for idx in list(metadata_indexes):
            if idx.name in removed:
                metadata_indexes.remove(idx)

        if not sqla_100:
            self._legacy_correct_for_dupe_uq_uix(
                conn_unique_constraints,
                conn_indexes,
                metadata_unique_constraints,
                metadata_indexes
            )

    def _legacy_correct_for_dupe_uq_uix(self, conn_unique_constraints,
                                        conn_indexes,
                                        metadata_unique_constraints,
                                        metadata_indexes):

        # then dedupe unique indexes vs. constraints, since MySQL
        # doesn't really have unique constraints as a separate construct.
        # but look in the metadata and try to maintain constructs
        # that already seem to be defined one way or the other
        # on that side.  See #276
        metadata_uq_names = set([
            cons.name for cons in metadata_unique_constraints
            if cons.name is not None])

        unnamed_metadata_uqs = set([
            compare._uq_constraint_sig(cons).sig
            for cons in metadata_unique_constraints
            if cons.name is None
        ])

        metadata_ix_names = set([
            cons.name for cons in metadata_indexes if cons.unique])
        conn_uq_names = dict(
            (cons.name, cons) for cons in conn_unique_constraints
        )
        conn_ix_names = dict(
            (cons.name, cons) for cons in conn_indexes if cons.unique
        )

        for overlap in set(conn_uq_names).intersection(conn_ix_names):
            if overlap not in metadata_uq_names:
                if compare._uq_constraint_sig(conn_uq_names[overlap]).sig \
                        not in unnamed_metadata_uqs:

                    conn_unique_constraints.discard(conn_uq_names[overlap])
            elif overlap not in metadata_ix_names:
                conn_indexes.discard(conn_ix_names[overlap])

    def correct_for_autogen_foreignkeys(self, conn_fks, metadata_fks):
        conn_fk_by_sig = dict(
            (compare._fk_constraint_sig(fk).sig, fk) for fk in conn_fks
        )
        metadata_fk_by_sig = dict(
            (compare._fk_constraint_sig(fk).sig, fk) for fk in metadata_fks
        )

        for sig in set(conn_fk_by_sig).intersection(metadata_fk_by_sig):
            mdfk = metadata_fk_by_sig[sig]
            cnfk = conn_fk_by_sig[sig]
            # MySQL considers RESTRICT to be the default and doesn't
            # report on it.  if the model has explicit RESTRICT and
            # the conn FK has None, set it to RESTRICT
            if mdfk.ondelete is not None and \
                    mdfk.ondelete.lower() == 'restrict' and \
                    cnfk.ondelete is None:
                cnfk.ondelete = 'RESTRICT'
            if mdfk.onupdate is not None and \
                    mdfk.onupdate.lower() == 'restrict' and \
                    cnfk.onupdate is None:
                cnfk.onupdate = 'RESTRICT'


class MySQLAlterDefault(AlterColumn):

    def __init__(self, name, column_name, default, schema=None):
        super(AlterColumn, self).__init__(name, schema=schema)
        self.column_name = column_name
        self.default = default


class MySQLChangeColumn(AlterColumn):

    def __init__(self, name, column_name, schema=None,
                 newname=None,
                 type_=None,
                 nullable=None,
                 default=False,
                 autoincrement=None):
        super(AlterColumn, self).__init__(name, schema=schema)
        self.column_name = column_name
        self.nullable = nullable
        self.newname = newname
        self.default = default
        self.autoincrement = autoincrement
        if type_ is None:
            raise util.CommandError(
                "All MySQL CHANGE/MODIFY COLUMN operations "
                "require the existing type."
            )

        self.type_ = sqltypes.to_instance(type_)


class MySQLModifyColumn(MySQLChangeColumn):
    pass


@compiles(ColumnNullable, 'mysql')
@compiles(ColumnName, 'mysql')
@compiles(ColumnDefault, 'mysql')
@compiles(ColumnType, 'mysql')
def _mysql_doesnt_support_individual(element, compiler, **kw):
    raise NotImplementedError(
        "Individual alter column constructs not supported by MySQL"
    )


@compiles(MySQLAlterDefault, "mysql")
def _mysql_alter_default(element, compiler, **kw):
    return "%s ALTER COLUMN %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_column_name(compiler, element.column_name),
        "SET DEFAULT %s" % format_server_default(compiler, element.default)
        if element.default is not None
        else "DROP DEFAULT"
    )


@compiles(MySQLModifyColumn, "mysql")
def _mysql_modify_column(element, compiler, **kw):
    return "%s MODIFY %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_column_name(compiler, element.column_name),
        _mysql_colspec(
            compiler,
            nullable=element.nullable,
            server_default=element.default,
            type_=element.type_,
            autoincrement=element.autoincrement
        ),
    )


@compiles(MySQLChangeColumn, "mysql")
def _mysql_change_column(element, compiler, **kw):
    return "%s CHANGE %s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_column_name(compiler, element.column_name),
        format_column_name(compiler, element.newname),
        _mysql_colspec(
            compiler,
            nullable=element.nullable,
            server_default=element.default,
            type_=element.type_,
            autoincrement=element.autoincrement
        ),
    )


def _render_value(compiler, expr):
    if isinstance(expr, string_types):
        return "'%s'" % expr
    else:
        return compiler.sql_compiler.process(expr)


def _mysql_colspec(compiler, nullable, server_default, type_,
                   autoincrement):
    spec = "%s %s" % (
        compiler.dialect.type_compiler.process(type_),
        "NULL" if nullable else "NOT NULL"
    )
    if autoincrement:
        spec += " AUTO_INCREMENT"
    if server_default is not False and server_default is not None:
        spec += " DEFAULT %s" % _render_value(compiler, server_default)

    return spec


@compiles(schema.DropConstraint, "mysql")
def _mysql_drop_constraint(element, compiler, **kw):
    """Redefine SQLAlchemy's drop constraint to
    raise errors for invalid constraint type."""

    constraint = element.element
    if isinstance(constraint, (schema.ForeignKeyConstraint,
                               schema.PrimaryKeyConstraint,
                               schema.UniqueConstraint)
                  ):
        return compiler.visit_drop_constraint(element, **kw)
    elif isinstance(constraint, schema.CheckConstraint):
        raise NotImplementedError(
            "MySQL does not support CHECK constraints.")
    else:
        raise NotImplementedError(
            "No generic 'DROP CONSTRAINT' in MySQL - "
            "please specify constraint type")
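
# Illustrative sketch (not part of the original file): MySQLChangeColumn
# above refuses to run without a type, since MySQL CHANGE/MODIFY re-renders
# the whole column.  In a migration script that means existing_type (or
# type_) must accompany any other alteration; names here are assumptions.
import sqlalchemy as sa

from alembic import op

op.alter_column(
    'user', 'nickname',
    existing_type=sa.String(50),  # required on MySQL
    nullable=False,
)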
@ -1,86 +0,0 @@
from sqlalchemy.ext.compiler import compiles

from .impl import DefaultImpl
from .base import alter_table, AddColumn, ColumnName, \
    format_column_name, ColumnNullable, \
    format_server_default, ColumnDefault, format_type, ColumnType


class OracleImpl(DefaultImpl):
    __dialect__ = 'oracle'
    transactional_ddl = False
    batch_separator = "/"
    command_terminator = ""

    def __init__(self, *arg, **kw):
        super(OracleImpl, self).__init__(*arg, **kw)
        self.batch_separator = self.context_opts.get(
            "oracle_batch_separator",
            self.batch_separator)

    def _exec(self, construct, *args, **kw):
        result = super(OracleImpl, self)._exec(construct, *args, **kw)
        if self.as_sql and self.batch_separator:
            self.static_output(self.batch_separator)
        return result

    def emit_begin(self):
        self._exec("SET TRANSACTION READ WRITE")

    def emit_commit(self):
        self._exec("COMMIT")


@compiles(AddColumn, 'oracle')
def visit_add_column(element, compiler, **kw):
    return "%s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        add_column(compiler, element.column, **kw),
    )


@compiles(ColumnNullable, 'oracle')
def visit_column_nullable(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "NULL" if element.nullable else "NOT NULL"
    )


@compiles(ColumnType, 'oracle')
def visit_column_type(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "%s" % format_type(compiler, element.type_)
    )


@compiles(ColumnName, 'oracle')
def visit_column_name(element, compiler, **kw):
    return "%s RENAME COLUMN %s TO %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_column_name(compiler, element.column_name),
        format_column_name(compiler, element.newname)
    )


@compiles(ColumnDefault, 'oracle')
def visit_column_default(element, compiler, **kw):
    return "%s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "DEFAULT %s" %
        format_server_default(compiler, element.default)
        if element.default is not None
        else "DEFAULT NULL"
    )


def alter_column(compiler, name):
    return 'MODIFY %s' % format_column_name(compiler, name)


def add_column(compiler, column, **kw):
    return "ADD %s" % compiler.get_column_specification(column, **kw)
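
# Illustrative sketch (not part of the original file): with the "/" batch
# separator above, offline mode emits each statement followed by "/" so the
# output can be fed to SQL*Plus.  sql=True is the programmatic form of the
# --sql flag; the config file name is an assumption.
from alembic import command
from alembic.config import Config

command.upgrade(Config("alembic.ini"), "head", sql=True)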
@ -1,458 +0,0 @@
|
|||
import re
|
||||
|
||||
from ..util import compat
|
||||
from .. import util
|
||||
from .base import compiles, alter_column, alter_table, format_table_name, \
|
||||
format_type, AlterColumn, RenameTable
|
||||
from .impl import DefaultImpl
|
||||
from sqlalchemy.dialects.postgresql import INTEGER, BIGINT
|
||||
from ..autogenerate import render
|
||||
from sqlalchemy import text, Numeric, Column
|
||||
from sqlalchemy.types import NULLTYPE
|
||||
from sqlalchemy import types as sqltypes
|
||||
|
||||
from ..operations.base import Operations
|
||||
from ..operations.base import BatchOperations
|
||||
from ..operations import ops
|
||||
from ..util import sqla_compat
|
||||
from ..operations import schemaobj
|
||||
from ..autogenerate import render
|
||||
|
||||
import logging
|
||||
|
||||
if util.sqla_08:
|
||||
from sqlalchemy.sql.expression import UnaryExpression
|
||||
else:
|
||||
from sqlalchemy.sql.expression import _UnaryExpression as UnaryExpression
|
||||
|
||||
if util.sqla_100:
|
||||
from sqlalchemy.dialects.postgresql import ExcludeConstraint
|
||||
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class PostgresqlImpl(DefaultImpl):
|
||||
__dialect__ = 'postgresql'
|
||||
transactional_ddl = True
|
||||
|
||||
def prep_table_for_batch(self, table):
|
||||
for constraint in table.constraints:
|
||||
if constraint.name is not None:
|
||||
self.drop_constraint(constraint)
|
||||
|
||||
def compare_server_default(self, inspector_column,
|
||||
metadata_column,
|
||||
rendered_metadata_default,
|
||||
rendered_inspector_default):
|
||||
# don't do defaults for SERIAL columns
|
||||
if metadata_column.primary_key and \
|
||||
metadata_column is metadata_column.table._autoincrement_column:
|
||||
return False
|
||||
|
||||
conn_col_default = rendered_inspector_default
|
||||
|
||||
defaults_equal = conn_col_default == rendered_metadata_default
|
||||
if defaults_equal:
|
||||
return False
|
||||
|
||||
if None in (conn_col_default, rendered_metadata_default):
|
||||
return not defaults_equal
|
||||
|
||||
if metadata_column.server_default is not None and \
|
||||
isinstance(metadata_column.server_default.arg,
|
||||
compat.string_types) and \
|
||||
not re.match(r"^'.+'$", rendered_metadata_default) and \
|
||||
not isinstance(inspector_column.type, Numeric):
|
||||
# don't single quote if the column type is float/numeric,
|
||||
# otherwise a comparison such as SELECT 5 = '5.0' will fail
|
||||
rendered_metadata_default = re.sub(
|
||||
r"^u?'?|'?$", "'", rendered_metadata_default)
|
||||
|
||||
return not self.connection.scalar(
|
||||
"SELECT %s = %s" % (
|
||||
conn_col_default,
|
||||
rendered_metadata_default
|
||||
)
|
||||
)
|
||||
|
||||
def alter_column(self, table_name, column_name,
|
||||
nullable=None,
|
||||
server_default=False,
|
||||
name=None,
|
||||
type_=None,
|
||||
schema=None,
|
||||
autoincrement=None,
|
||||
existing_type=None,
|
||||
existing_server_default=None,
|
||||
existing_nullable=None,
|
||||
existing_autoincrement=None,
|
||||
**kw
|
||||
):
|
||||
|
||||
using = kw.pop('postgresql_using', None)
|
||||
|
||||
if using is not None and type_ is None:
|
||||
raise util.CommandError(
|
||||
"postgresql_using must be used with the type_ parameter")
|
||||
|
||||
if type_ is not None:
|
||||
self._exec(PostgresqlColumnType(
|
||||
table_name, column_name, type_, schema=schema,
|
||||
using=using, existing_type=existing_type,
|
||||
existing_server_default=existing_server_default,
|
||||
existing_nullable=existing_nullable,
|
||||
))
|
||||
|
||||
super(PostgresqlImpl, self).alter_column(
|
||||
table_name, column_name,
|
||||
nullable=nullable,
|
||||
server_default=server_default,
|
||||
name=name,
|
||||
schema=schema,
|
||||
autoincrement=autoincrement,
|
||||
existing_type=existing_type,
|
||||
existing_server_default=existing_server_default,
|
||||
existing_nullable=existing_nullable,
|
||||
existing_autoincrement=existing_autoincrement,
|
||||
**kw)
|
||||
|
||||
def autogen_column_reflect(self, inspector, table, column_info):
|
||||
if column_info.get('default') and \
|
||||
isinstance(column_info['type'], (INTEGER, BIGINT)):
|
||||
seq_match = re.match(
|
||||
r"nextval\('(.+?)'::regclass\)",
|
||||
column_info['default'])
|
||||
if seq_match:
|
||||
info = inspector.bind.execute(text(
|
||||
"select c.relname, a.attname "
|
||||
"from pg_class as c join pg_depend d on d.objid=c.oid and "
|
||||
"d.classid='pg_class'::regclass and "
|
||||
"d.refclassid='pg_class'::regclass "
|
||||
"join pg_class t on t.oid=d.refobjid "
|
||||
"join pg_attribute a on a.attrelid=t.oid and "
|
||||
"a.attnum=d.refobjsubid "
|
||||
"where c.relkind='S' and c.relname=:seqname"
|
||||
), seqname=seq_match.group(1)).first()
|
||||
if info:
|
||||
seqname, colname = info
|
||||
if colname == column_info['name']:
|
||||
log.info(
|
||||
"Detected sequence named '%s' as "
|
||||
"owned by integer column '%s(%s)', "
|
||||
"assuming SERIAL and omitting",
|
||||
seqname, table.name, colname)
|
||||
# sequence, and the owner is this column,
|
||||
# its a SERIAL - whack it!
|
||||
del column_info['default']
|
||||
|
||||
def correct_for_autogen_constraints(self, conn_unique_constraints,
|
||||
conn_indexes,
|
||||
metadata_unique_constraints,
|
||||
metadata_indexes):
|
||||
conn_uniques_by_name = dict(
|
||||
(c.name, c) for c in conn_unique_constraints)
|
||||
conn_indexes_by_name = dict(
|
||||
(c.name, c) for c in conn_indexes)
|
||||
|
||||
# TODO: if SQLA 1.0, make use of "duplicates_constraint"
|
||||
# metadata
|
||||
doubled_constraints = dict(
|
||||
(name, (conn_uniques_by_name[name], conn_indexes_by_name[name]))
|
||||
for name in set(conn_uniques_by_name).intersection(
|
||||
conn_indexes_by_name)
|
||||
)
|
||||
for name, (uq, ix) in doubled_constraints.items():
|
||||
conn_indexes.remove(ix)
|
||||
|
||||
for idx in list(metadata_indexes):
|
||||
if idx.name in conn_indexes_by_name:
|
||||
continue
|
||||
if util.sqla_08:
|
||||
exprs = idx.expressions
|
||||
else:
|
||||
exprs = idx.columns
|
||||
for expr in exprs:
|
||||
while isinstance(expr, UnaryExpression):
|
||||
expr = expr.element
|
||||
if not isinstance(expr, Column):
|
||||
util.warn(
|
||||
"autogenerate skipping functional index %s; "
|
||||
"not supported by SQLAlchemy reflection" % idx.name
|
||||
)
|
||||
metadata_indexes.discard(idx)

    def render_type(self, type_, autogen_context):
        if hasattr(self, '_render_%s_type' % type_.__visit_name__):
            meth = getattr(self, '_render_%s_type' % type_.__visit_name__)
            return meth(type_, autogen_context)

        return False

    def _render_type_w_subtype(self, type_, autogen_context, attrname, regexp):
        outer_repr = repr(type_)
        inner_type = getattr(type_, attrname, None)
        if inner_type is None:
            return False

        inner_repr = repr(inner_type)

        inner_repr = re.sub(r'([\(\)])', r'\\\1', inner_repr)
        sub_type = render._repr_type(getattr(type_, attrname), autogen_context)
        outer_type = re.sub(
            regexp + inner_repr,
            r"\1%s" % sub_type, outer_repr)
        return "%s.%s" % ("postgresql", outer_type)

    def _render_ARRAY_type(self, type_, autogen_context):
        return self._render_type_w_subtype(
            type_, autogen_context, 'item_type', r'(.+?\()'
        )

    def _render_JSON_type(self, type_, autogen_context):
        return self._render_type_w_subtype(
            type_, autogen_context, 'astext_type', r'(.+?\(.*astext_type=)'
        )

    def _render_JSONB_type(self, type_, autogen_context):
        return self._render_type_w_subtype(
            type_, autogen_context, 'astext_type', r'(.+?\(.*astext_type=)'
        )
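
    # Illustrative sketch, not part of the original source: for a type
    # such as postgresql.ARRAY(Integer), _render_type_w_subtype()
    # substitutes the fully rendered inner type into the outer repr,
    # yielding a string along the lines of
    #     "postgresql.ARRAY(sa.Integer())"
    # suitable for inclusion in an autogenerated migration script.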


class PostgresqlColumnType(AlterColumn):

    def __init__(self, name, column_name, type_, **kw):
        using = kw.pop('using', None)
        super(PostgresqlColumnType, self).__init__(name, column_name, **kw)
        self.type_ = sqltypes.to_instance(type_)
        self.using = using


@compiles(RenameTable, "postgresql")
def visit_rename_table(element, compiler, **kw):
    return "%s RENAME TO %s" % (
        alter_table(compiler, element.table_name, element.schema),
        format_table_name(compiler, element.new_table_name, None)
    )


@compiles(PostgresqlColumnType, "postgresql")
def visit_column_type(element, compiler, **kw):
    return "%s %s %s %s" % (
        alter_table(compiler, element.table_name, element.schema),
        alter_column(compiler, element.column_name),
        "TYPE %s" % format_type(compiler, element.type_),
        "USING %s" % element.using if element.using else ""
    )
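
# Illustrative sketch, not part of the original source: given a directive
# such as (names hypothetical)
#     op.alter_column("t", "c", type_=Integer, postgresql_using="c::integer")
# the visitor above compiles to SQL along the lines of
#     ALTER TABLE t ALTER COLUMN c TYPE INTEGER USING c::integer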


@Operations.register_operation("create_exclude_constraint")
@BatchOperations.register_operation(
    "create_exclude_constraint", "batch_create_exclude_constraint")
@ops.AddConstraintOp.register_add_constraint("exclude_constraint")
class CreateExcludeConstraintOp(ops.AddConstraintOp):
    """Represent a create exclude constraint operation."""

    constraint_type = "exclude"

    def __init__(
            self, constraint_name, table_name,
            elements, where=None, schema=None,
            _orig_constraint=None, **kw):
        self.constraint_name = constraint_name
        self.table_name = table_name
        self.elements = elements
        self.where = where
        self.schema = schema
        self._orig_constraint = _orig_constraint
        self.kw = kw

    @classmethod
    def from_constraint(cls, constraint):
        constraint_table = sqla_compat._table_for_constraint(constraint)

        return cls(
            constraint.name,
            constraint_table.name,
            [(expr, op) for expr, name, op in constraint._render_exprs],
            where=constraint.where,
            schema=constraint_table.schema,
            _orig_constraint=constraint,
            deferrable=constraint.deferrable,
            initially=constraint.initially,
            using=constraint.using
        )

    def to_constraint(self, migration_context=None):
        if not util.sqla_100:
            raise NotImplementedError(
                "ExcludeConstraint not supported until SQLAlchemy 1.0")
        if self._orig_constraint is not None:
            return self._orig_constraint
        schema_obj = schemaobj.SchemaObjects(migration_context)
        t = schema_obj.table(self.table_name, schema=self.schema)
        excl = ExcludeConstraint(
            *self.elements,
            name=self.constraint_name,
            where=self.where,
            **self.kw
        )
        for expr, name, oper in excl._render_exprs:
            t.append_column(Column(name, NULLTYPE))
        t.append_constraint(excl)
        return excl

    @classmethod
    def create_exclude_constraint(
            cls, operations,
            constraint_name, table_name, *elements, **kw):
        """Issue an alter to create an EXCLUDE constraint using the
        current migration context.

        .. note:: This method is Postgresql specific, and additionally
           requires at least SQLAlchemy 1.0.

        e.g.::

            from alembic import op

            op.create_exclude_constraint(
                "user_excl",
                "user",
                ("period", '&&'),
                ("group", '='),
                where=("group != 'some group'")
            )

        Note that the expressions work the same way as that of
        the ``ExcludeConstraint`` object itself; if plain strings are
        passed, quoting rules must be applied manually.

        :param constraint_name: Name of the constraint.
        :param table_name: String name of the source table.
        :param elements: exclude conditions.
        :param where: SQL expression or SQL string with optional WHERE
         clause.
        :param deferrable: optional bool. If set, emit DEFERRABLE or
         NOT DEFERRABLE when issuing DDL for this constraint.
        :param initially: optional string. If set, emit INITIALLY <value>
         when issuing DDL for this constraint.
        :param schema: Optional schema name to operate within.

        .. versionadded:: 0.9.0

        """
        op = cls(constraint_name, table_name, elements, **kw)
        return operations.invoke(op)

    @classmethod
    def batch_create_exclude_constraint(
            cls, operations, constraint_name, *elements, **kw):
        """Issue a "create exclude constraint" instruction using the
        current batch migration context.

        .. note:: This method is Postgresql specific, and additionally
           requires at least SQLAlchemy 1.0.

        .. versionadded:: 0.9.0

        .. seealso::

            :meth:`.Operations.create_exclude_constraint`

        """
        kw['schema'] = operations.impl.schema
        op = cls(constraint_name, operations.impl.table_name, elements, **kw)
        return operations.invoke(op)
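
    # Illustrative sketch, not part of the original source: in batch mode
    # the table name comes from the batch context, e.g.:
    #
    #     with op.batch_alter_table("user") as batch_op:
    #         batch_op.create_exclude_constraint(
    #             "user_excl", ("period", '&&'),
    #             where="group != 'some group'")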


@render.renderers.dispatch_for(CreateExcludeConstraintOp)
def _add_exclude_constraint(autogen_context, op):
    return _exclude_constraint(
        op.to_constraint(),
        autogen_context,
        alter=True
    )

if util.sqla_100:
    @render._constraint_renderers.dispatch_for(ExcludeConstraint)
    def _render_inline_exclude_constraint(constraint, autogen_context):
        rendered = render._user_defined_render(
            "exclude", constraint, autogen_context)
        if rendered is not False:
            return rendered

        return _exclude_constraint(constraint, autogen_context, False)


def _postgresql_autogenerate_prefix(autogen_context):

    imports = autogen_context.imports
    if imports is not None:
        imports.add("from sqlalchemy.dialects import postgresql")
    return "postgresql."


def _exclude_constraint(constraint, autogen_context, alter):
    opts = []

    has_batch = autogen_context._has_batch

    if constraint.deferrable:
        opts.append(("deferrable", str(constraint.deferrable)))
    if constraint.initially:
        opts.append(("initially", str(constraint.initially)))
    if constraint.using:
        opts.append(("using", str(constraint.using)))
    if not has_batch and alter and constraint.table.schema:
        opts.append(("schema", render._ident(constraint.table.schema)))
    if not alter and constraint.name:
        opts.append(
            ("name",
             render._render_gen_name(autogen_context, constraint.name)))

    if alter:
        args = [
            repr(render._render_gen_name(
                autogen_context, constraint.name))]
        if not has_batch:
            args += [repr(render._ident(constraint.table.name))]
        args.extend([
            "(%s, %r)" % (
                render._render_potential_expr(
                    sqltext, autogen_context, wrap_in_text=False),
                opstring
            )
            for sqltext, name, opstring in constraint._render_exprs
        ])
        if constraint.where is not None:
            args.append(
                "where=%s" % render._render_potential_expr(
                    constraint.where, autogen_context)
            )
        args.extend(["%s=%r" % (k, v) for k, v in opts])
        return "%(prefix)screate_exclude_constraint(%(args)s)" % {
            'prefix': render._alembic_autogenerate_prefix(autogen_context),
            'args': ", ".join(args)
        }
    else:
        args = [
            "(%s, %r)" % (
                render._render_potential_expr(
                    sqltext, autogen_context, wrap_in_text=False),
                opstring
            ) for sqltext, name, opstring in constraint._render_exprs
        ]
        if constraint.where is not None:
            args.append(
                "where=%s" % render._render_potential_expr(
                    constraint.where, autogen_context)
            )
        args.extend(["%s=%r" % (k, v) for k, v in opts])
        return "%(prefix)sExcludeConstraint(%(args)s)" % {
            "prefix": _postgresql_autogenerate_prefix(autogen_context),
            "args": ", ".join(args)
        }
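
# Illustrative sketch, not part of the original source: depending on the
# "alter" flag, the renderer above produces either an operation directive,
#     op.create_exclude_constraint('user_excl', 'user', ('period', '&&'))
# or an inline constraint for use within a create_table() rendering,
#     postgresql.ExcludeConstraint(('period', '&&'), name='user_excl')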
@ -1,100 +0,0 @@
from .. import util
from .impl import DefaultImpl
import re


class SQLiteImpl(DefaultImpl):
    __dialect__ = 'sqlite'

    transactional_ddl = False
    """SQLite supports transactional DDL, but pysqlite does not:
    see: http://bugs.python.org/issue10740
    """

    def requires_recreate_in_batch(self, batch_op):
        """Return True if the given :class:`.BatchOperationsImpl`
        would need the table to be recreated and copied in order to
        proceed.

        Normally, only returns True on SQLite when operations other
        than add_column, create_index, or drop_index are present.

        """
        for op in batch_op.batch:
            if op[0] not in ('add_column', 'create_index', 'drop_index'):
                return True
        else:
            return False
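
    # Illustrative sketch, not part of the original source: a batch queue
    # of only ("add_column", ...) entries proceeds with plain ALTER
    # statements, whereas e.g. one ("drop_column", ...) entry flips the
    # whole batch over to the copy-and-rename recreate strategy.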

    def add_constraint(self, const):
        # attempt to distinguish between an
        # auto-gen constraint and an explicit one
        if const._create_rule is None:
            raise NotImplementedError(
                "No support for ALTER of constraints in SQLite dialect")
        elif const._create_rule(self):
            util.warn("Skipping unsupported ALTER for "
                      "creation of implicit constraint")

    def drop_constraint(self, const):
        if const._create_rule is None:
            raise NotImplementedError(
                "No support for ALTER of constraints in SQLite dialect")

    def compare_server_default(self, inspector_column,
                               metadata_column,
                               rendered_metadata_default,
                               rendered_inspector_default):

        if rendered_metadata_default is not None:
            rendered_metadata_default = re.sub(
                r"^\"'|\"'$", "", rendered_metadata_default)
        if rendered_inspector_default is not None:
            rendered_inspector_default = re.sub(
                r"^\"'|\"'$", "", rendered_inspector_default)

        return rendered_inspector_default != rendered_metadata_default
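
    # Illustrative sketch, not part of the original source: a string
    # server default may come through doubly quoted, e.g. as "'0'" on
    # one side of the comparison; stripping the quoting on both sides
    # lets the inequality test above compare like with like.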

    def correct_for_autogen_constraints(
            self, conn_unique_constraints, conn_indexes,
            metadata_unique_constraints,
            metadata_indexes):

        if util.sqla_100:
            return

        # adjustments to accommodate for SQLite unnamed unique constraints
        # not being reported from the backend; this was updated in
        # SQLA 1.0.

        def uq_sig(uq):
            return tuple(sorted(uq.columns.keys()))

        conn_unique_sigs = set(
            uq_sig(uq)
            for uq in conn_unique_constraints
        )

        for idx in list(metadata_unique_constraints):
            # SQLite backend can't report on unnamed UNIQUE constraints,
            # so remove these, unless we see an exact signature match
            if idx.name is None and uq_sig(idx) not in conn_unique_sigs:
                metadata_unique_constraints.remove(idx)


# @compiles(AddColumn, 'sqlite')
# def visit_add_column(element, compiler, **kw):
#     return "%s %s" % (
#         alter_table(compiler, element.table_name, element.schema),
#         add_column(compiler, element.column, **kw)
#     )


# def add_column(compiler, column, **kw):
#     text = "ADD COLUMN %s" % compiler.get_column_specification(column, **kw)
#     need to modify SQLAlchemy so that the CHECK associated with a Boolean
#     or Enum gets placed as part of the column constraints, not the Table
#     see ticket 98
#     for const in column.constraints:
#         text += compiler.process(AddConstraint(const))
#     return text
@ -1,6 +0,0 @@
from .operations.base import Operations

# create proxy functions for
# each method on the Operations class.
Operations.create_module_class_proxy(globals(), locals())
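
# Illustrative sketch, not part of the original source: the proxying set
# up above is what allows module-level calls within a migration script:
#
#     from alembic import op
#
#     def upgrade():
#         op.add_column('account', sa.Column('last_seen', sa.DateTime()))
#
# Each call is forwarded to the Operations instance installed by the
# currently running migration context.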
@ -1,6 +0,0 @@
from .base import Operations, BatchOperations
from .ops import MigrateOperation
from . import toimpl


__all__ = ['Operations', 'BatchOperations', 'MigrateOperation']
@ -1,444 +0,0 @@
from contextlib import contextmanager

from .. import util
from ..util import sqla_compat
from . import batch
from . import schemaobj
from ..util.compat import exec_
import textwrap
import inspect

__all__ = ('Operations', 'BatchOperations')

try:
    from sqlalchemy.sql.naming import conv
except ImportError:
    conv = None


class Operations(util.ModuleClsProxy):

    """Define high level migration operations.

    Each operation corresponds to some schema migration operation,
    executed against a particular :class:`.MigrationContext`
    which in turn represents connectivity to a database,
    or a file output stream.

    While :class:`.Operations` is normally configured as
    part of the :meth:`.EnvironmentContext.run_migrations`
    method called from an ``env.py`` script, a standalone
    :class:`.Operations` instance can be
    made for use cases external to regular Alembic
    migrations by passing in a :class:`.MigrationContext`::

        from alembic.migration import MigrationContext
        from alembic.operations import Operations

        conn = myengine.connect()
        ctx = MigrationContext.configure(conn)
        op = Operations(ctx)

        op.alter_column("t", "c", nullable=True)

    Note that as of 0.8, most of the methods on this class are produced
    dynamically using the :meth:`.Operations.register_operation`
    method.

    """

    _to_impl = util.Dispatcher()

    def __init__(self, migration_context, impl=None):
        """Construct a new :class:`.Operations`

        :param migration_context: a :class:`.MigrationContext`
         instance.

        """
        self.migration_context = migration_context
        if impl is None:
            self.impl = migration_context.impl
        else:
            self.impl = impl

        self.schema_obj = schemaobj.SchemaObjects(migration_context)

    @classmethod
    def register_operation(cls, name, sourcename=None):
        """Register a new operation for this class.

        This method is normally used to add new operations
        to the :class:`.Operations` class, and possibly the
        :class:`.BatchOperations` class as well.  All Alembic migration
        operations are implemented via this system, however the system
        is also available as a public API to facilitate adding custom
        operations.

        .. versionadded:: 0.8.0

        .. seealso::

            :ref:`operation_plugins`

        """
        def register(op_cls):
            if sourcename is None:
                fn = getattr(op_cls, name)
                source_name = fn.__name__
            else:
                fn = getattr(op_cls, sourcename)
                source_name = fn.__name__

            spec = inspect.getargspec(fn)

            name_args = spec[0]
            assert name_args[0:2] == ['cls', 'operations']

            name_args[0:2] = ['self']

            args = inspect.formatargspec(*spec)
            num_defaults = len(spec[3]) if spec[3] else 0
            if num_defaults:
                defaulted_vals = name_args[0 - num_defaults:]
            else:
                defaulted_vals = ()

            apply_kw = inspect.formatargspec(
                name_args, spec[1], spec[2],
                defaulted_vals,
                formatvalue=lambda x: '=' + x)

            func_text = textwrap.dedent("""\
            def %(name)s%(args)s:
                %(doc)r
                return op_cls.%(source_name)s%(apply_kw)s
            """ % {
                'name': name,
                'source_name': source_name,
                'args': args,
                'apply_kw': apply_kw,
                'doc': fn.__doc__,
                'meth': fn.__name__
            })
            globals_ = {'op_cls': op_cls}
            lcl = {}
            exec_(func_text, globals_, lcl)
            setattr(cls, name, lcl[name])
            fn.__func__.__doc__ = "This method is proxied on "\
                "the :class:`.%s` class, via the :meth:`.%s.%s` method." % (
                    cls.__name__, cls.__name__, name
                )
            if hasattr(fn, '_legacy_translations'):
                lcl[name]._legacy_translations = fn._legacy_translations
            return op_cls
        return register

    @classmethod
    def implementation_for(cls, op_cls):
        """Register an implementation for a given :class:`.MigrateOperation`.

        This is part of the operation extensibility API.

        .. seealso::

            :ref:`operation_plugins` - example of use

        """

        def decorate(fn):
            cls._to_impl.dispatch_for(op_cls)(fn)
            return fn
        return decorate
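
    # Illustrative sketch, not part of the original source: the two hooks
    # above combine to define a custom operation (names hypothetical):
    #
    #     @Operations.register_operation("grant_select")
    #     class GrantSelectOp(MigrateOperation):
    #         def __init__(self, table_name):
    #             self.table_name = table_name
    #
    #         @classmethod
    #         def grant_select(cls, operations, table_name):
    #             return operations.invoke(cls(table_name))
    #
    #     @Operations.implementation_for(GrantSelectOp)
    #     def grant_select(operations, operation):
    #         operations.execute(
    #             "GRANT SELECT ON %s TO PUBLIC" % operation.table_name)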

    @classmethod
    @contextmanager
    def context(cls, migration_context):
        op = Operations(migration_context)
        op._install_proxy()
        yield op
        op._remove_proxy()
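
    # Illustrative sketch, not part of the original source: the context
    # manager above installs the module-level "op" proxy for a scope:
    #
    #     with Operations.context(migration_context) as op:
    #         op.alter_column("t", "c", nullable=True)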

    @contextmanager
    def batch_alter_table(
            self, table_name, schema=None, recreate="auto", copy_from=None,
            table_args=(), table_kwargs=util.immutabledict(),
            reflect_args=(), reflect_kwargs=util.immutabledict(),
            naming_convention=None):
        """Invoke a series of per-table migrations in batch.

        Batch mode allows a series of operations specific to a table
        to be syntactically grouped together, and allows for alternate
        modes of table migration, in particular the "recreate" style of
        migration required by SQLite.

        "recreate" style is as follows:

        1. A new table is created with the new specification, based on the
           migration directives within the batch, using a temporary name.

        2. The data is copied from the existing table to the new table.

        3. The existing table is dropped.

        4. The new table is renamed to the existing table name.

        The directive by default will only use "recreate" style on the
        SQLite backend, and only if directives are present which require
        this form, e.g. anything other than ``add_column()``.  The batch
        operation on other backends will proceed using standard ALTER TABLE
        operations.

        The method is used as a context manager, which returns an instance
        of :class:`.BatchOperations`; this object is the same as
        :class:`.Operations` except that table names and schema names
        are omitted.  E.g.::

            with op.batch_alter_table("some_table") as batch_op:
                batch_op.add_column(Column('foo', Integer))
                batch_op.drop_column('bar')

        The operations within the context manager are invoked at once
        when the context is ended.  When run against SQLite, if the
        migrations include operations not supported by SQLite's ALTER TABLE,
        the entire table will be copied to a new one with the new
        specification, moving all data across as well.

        The copy operation by default uses reflection to retrieve the current
        structure of the table, and therefore :meth:`.batch_alter_table`
        in this mode requires that the migration is run in "online" mode.
        The ``copy_from`` parameter may be passed which refers to an existing
        :class:`.Table` object, which will bypass this reflection step.

        .. note::  The table copy operation will currently not copy
           CHECK constraints, and may not copy UNIQUE constraints that are
           unnamed, as is possible on SQLite.  See the section
           :ref:`sqlite_batch_constraints` for workarounds.

        :param table_name: name of table
        :param schema: optional schema name.
        :param recreate: under what circumstances the table should be
         recreated. At its default of ``"auto"``, the SQLite dialect will
         recreate the table if any operations other than ``add_column()``,
         ``create_index()``, or ``drop_index()`` are
         present. Other options include ``"always"`` and ``"never"``.
        :param copy_from: optional :class:`~sqlalchemy.schema.Table` object
         that will act as the structure of the table being copied.  If
         omitted, table reflection is used to retrieve the structure of the
         table.

         .. versionadded:: 0.7.6 Fully implemented the
            :paramref:`~.Operations.batch_alter_table.copy_from`
            parameter.

         .. seealso::

            :ref:`batch_offline_mode`

            :paramref:`~.Operations.batch_alter_table.reflect_args`

            :paramref:`~.Operations.batch_alter_table.reflect_kwargs`

        :param reflect_args: a sequence of additional positional arguments
         that will be applied to the table structure being reflected / copied;
         this may be used to pass column and constraint overrides to the
         table that will be reflected, in lieu of passing the whole
         :class:`~sqlalchemy.schema.Table` using
         :paramref:`~.Operations.batch_alter_table.copy_from`.

         .. versionadded:: 0.7.1

        :param reflect_kwargs: a dictionary of additional keyword arguments
         that will be applied to the table structure being copied; this may be
         used to pass additional table and reflection options to the table
         that will be reflected, in lieu of passing the whole
         :class:`~sqlalchemy.schema.Table` using
         :paramref:`~.Operations.batch_alter_table.copy_from`.

         .. versionadded:: 0.7.1

        :param table_args: a sequence of additional positional arguments that
         will be applied to the new :class:`~sqlalchemy.schema.Table` when
         created, in addition to those copied from the source table.
         This may be used to provide additional constraints such as CHECK
         constraints that may not be reflected.
        :param table_kwargs: a dictionary of additional keyword arguments
         that will be applied to the new :class:`~sqlalchemy.schema.Table`
         when created, in addition to those copied from the source table.
         This may be used to provide for additional table options that may
         not be reflected.

         .. versionadded:: 0.7.0

        :param naming_convention: a naming convention dictionary of the form
         described at :ref:`autogen_naming_conventions` which will be applied
         to the :class:`~sqlalchemy.schema.MetaData` during the reflection
         process.  This is typically required if one wants to drop SQLite
         constraints, as these constraints will not have names when
         reflected on this backend.  Requires SQLAlchemy **0.9.4** or greater.

         .. seealso::

            :ref:`dropping_sqlite_foreign_keys`

         .. versionadded:: 0.7.1

        .. note:: batch mode requires SQLAlchemy 0.8 or above.

        .. seealso::

            :ref:`batch_migrations`

        """
        impl = batch.BatchOperationsImpl(
            self, table_name, schema, recreate,
            copy_from, table_args, table_kwargs, reflect_args,
            reflect_kwargs, naming_convention)
        batch_op = BatchOperations(self.migration_context, impl=impl)
        yield batch_op
        impl.flush()
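
    # Illustrative sketch, not part of the original source: an offline
    # (--sql) batch migration can skip reflection by handing in the table
    # definition via copy_from (names hypothetical):
    #
    #     source = sa.Table(
    #         't', sa.MetaData(),
    #         sa.Column('id', sa.Integer, primary_key=True),
    #         sa.Column('x', sa.Integer))
    #
    #     with op.batch_alter_table("t", copy_from=source) as batch_op:
    #         batch_op.drop_column('x')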

    def get_context(self):
        """Return the :class:`.MigrationContext` object that's
        currently in use.

        """

        return self.migration_context

    def invoke(self, operation):
        """Given a :class:`.MigrateOperation`, invoke it in terms of
        this :class:`.Operations` instance.

        .. versionadded:: 0.8.0

        """
        fn = self._to_impl.dispatch(
            operation, self.migration_context.impl.__dialect__)
        return fn(self, operation)

    def f(self, name):
        """Indicate a string name that has already had a naming convention
        applied to it.

        This feature combines with the SQLAlchemy ``naming_convention``
        feature to disambiguate constraint names that have already had naming
        conventions applied to them, versus those that have not.  This is
        necessary in the case that the ``"%(constraint_name)s"`` token
        is used within a naming convention, so that it can be identified
        that this particular name should remain fixed.

        If the :meth:`.Operations.f` is used on a constraint, the naming
        convention will not take effect::

            op.add_column('t', 'x', Boolean(name=op.f('ck_bool_t_x')))

        Above, the CHECK constraint generated will have the name
        ``ck_bool_t_x`` regardless of whether or not a naming convention is
        in use.

        Alternatively, if a naming convention is in use, and 'f' is not used,
        names will be converted along conventions.  If the ``target_metadata``
        contains the naming convention
        ``{"ck": "ck_bool_%(table_name)s_%(constraint_name)s"}``, then the
        output of the following::

            op.add_column('t', 'x', Boolean(name='x'))

        will be::

            CONSTRAINT ck_bool_t_x CHECK (x in (1, 0))

        The function is rendered in the output of autogenerate when
        a particular constraint name is already converted, for SQLAlchemy
        version **0.9.4 and greater only**.  Even though ``naming_convention``
        was introduced in 0.9.2, the string disambiguation service is new
        as of 0.9.4.

        .. versionadded:: 0.6.4

        """
        if conv:
            return conv(name)
        else:
            raise NotImplementedError(
                "op.f() feature requires SQLAlchemy 0.9.4 or greater.")

    def inline_literal(self, value, type_=None):
        """Produce an 'inline literal' expression, suitable for
        using in an INSERT, UPDATE, or DELETE statement.

        When using Alembic in "offline" mode, CRUD operations
        aren't compatible with SQLAlchemy's default behavior surrounding
        literal values, which is that they are converted into bound values
        and passed separately into the ``execute()`` method of the DBAPI
        cursor.  An offline SQL script needs to have these rendered inline.
        While it should always be noted that inline literal values are an
        **enormous** security hole in an application that handles untrusted
        input, a schema migration is not run in this context, so
        literals are safe to render inline, with the caveat that
        advanced types like dates may not be supported directly
        by SQLAlchemy.

        See :meth:`.execute` for an example usage of
        :meth:`.inline_literal`.

        The environment can also be configured to attempt to render
        "literal" values inline automatically, for those simple types
        that are supported by the dialect; see
        :paramref:`.EnvironmentContext.configure.literal_binds` for this
        more recently added feature.

        :param value: The value to render.  Strings, integers, and simple
         numerics should be supported.  Other types like boolean,
         dates, etc. may or may not be supported yet by various
         backends.
        :param type_: optional - a :class:`sqlalchemy.types.TypeEngine`
         subclass stating the type of this value.  In SQLAlchemy
         expressions, this is usually derived automatically
         from the Python type of the value itself, as well as
         based on the context in which the value is used.

        .. seealso::

            :paramref:`.EnvironmentContext.configure.literal_binds`

        """
        return sqla_compat._literal_bindparam(None, value, type_=type_)
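
    # Illustrative sketch, not part of the original source: typical use
    # renders the literal directly into an offline UPDATE statement:
    #
    #     account = sa.table('account', sa.column('name'))
    #     op.execute(
    #         account.update().
    #         where(account.c.name == op.inline_literal('old')).
    #         values({'name': op.inline_literal('new')})
    #     )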

    def get_bind(self):
        """Return the current 'bind'.

        Under normal circumstances, this is the
        :class:`~sqlalchemy.engine.Connection` currently being used
        to emit SQL to the database.

        In a SQL script context, this value is ``None``. [TODO: verify this]

        """
        return self.migration_context.impl.bind


class BatchOperations(Operations):
    """Modifies the interface :class:`.Operations` for batch mode.

    This basically omits the ``table_name`` and ``schema`` parameters
    from associated methods, as these are a given when running under batch
    mode.

    .. seealso::

        :meth:`.Operations.batch_alter_table`

    Note that as of 0.8, most of the methods on this class are produced
    dynamically using the :meth:`.Operations.register_operation`
    method.

    """

    def _noop(self, operation):
        raise NotImplementedError(
            "The %s method does not apply to a batch table alter operation."
            % operation)
@ -1,372 +0,0 @@
from sqlalchemy import Table, MetaData, Index, select, Column, \
    ForeignKeyConstraint, PrimaryKeyConstraint, cast, CheckConstraint
from sqlalchemy import types as sqltypes
from sqlalchemy import schema as sql_schema
from sqlalchemy.util import OrderedDict
from .. import util
if util.sqla_08:
    from sqlalchemy.events import SchemaEventTarget
from ..util.sqla_compat import _columns_for_constraint, \
    _is_type_bound, _fk_is_self_referential


class BatchOperationsImpl(object):
    def __init__(self, operations, table_name, schema, recreate,
                 copy_from, table_args, table_kwargs,
                 reflect_args, reflect_kwargs, naming_convention):
        if not util.sqla_08:
            raise NotImplementedError(
                "batch mode requires SQLAlchemy 0.8 or greater.")
        self.operations = operations
        self.table_name = table_name
        self.schema = schema
        if recreate not in ('auto', 'always', 'never'):
            raise ValueError(
                "recreate may be one of 'auto', 'always', or 'never'.")
        self.recreate = recreate
        self.copy_from = copy_from
        self.table_args = table_args
        self.table_kwargs = dict(table_kwargs)
        self.reflect_args = reflect_args
        self.reflect_kwargs = reflect_kwargs
        self.naming_convention = naming_convention
        self.batch = []

    @property
    def dialect(self):
        return self.operations.impl.dialect

    @property
    def impl(self):
        return self.operations.impl

    def _should_recreate(self):
        if self.recreate == 'auto':
            return self.operations.impl.requires_recreate_in_batch(self)
        elif self.recreate == 'always':
            return True
        else:
            return False

    def flush(self):
        should_recreate = self._should_recreate()

        if not should_recreate:
            for opname, arg, kw in self.batch:
                fn = getattr(self.operations.impl, opname)
                fn(*arg, **kw)
        else:
            if self.naming_convention:
                m1 = MetaData(naming_convention=self.naming_convention)
            else:
                m1 = MetaData()

            if self.copy_from is not None:
                existing_table = self.copy_from
                reflected = False
            else:
                existing_table = Table(
                    self.table_name, m1,
                    schema=self.schema,
                    autoload=True,
                    autoload_with=self.operations.get_bind(),
                    *self.reflect_args, **self.reflect_kwargs)
                reflected = True

            batch_impl = ApplyBatchImpl(
                existing_table, self.table_args, self.table_kwargs, reflected)
            for opname, arg, kw in self.batch:
                fn = getattr(batch_impl, opname)
                fn(*arg, **kw)

            batch_impl._create(self.impl)

    def alter_column(self, *arg, **kw):
        self.batch.append(("alter_column", arg, kw))

    def add_column(self, *arg, **kw):
        self.batch.append(("add_column", arg, kw))

    def drop_column(self, *arg, **kw):
        self.batch.append(("drop_column", arg, kw))

    def add_constraint(self, const):
        self.batch.append(("add_constraint", (const,), {}))

    def drop_constraint(self, const):
        self.batch.append(("drop_constraint", (const, ), {}))

    def rename_table(self, *arg, **kw):
        self.batch.append(("rename_table", arg, kw))

    def create_index(self, idx):
        self.batch.append(("create_index", (idx,), {}))

    def drop_index(self, idx):
        self.batch.append(("drop_index", (idx,), {}))

    def create_table(self, table):
        raise NotImplementedError("Can't create table in batch mode")

    def drop_table(self, table):
        raise NotImplementedError("Can't drop table in batch mode")


class ApplyBatchImpl(object):
    def __init__(self, table, table_args, table_kwargs, reflected):
        self.table = table  # this is a Table object
        self.table_args = table_args
        self.table_kwargs = table_kwargs
        self.new_table = None
        self.column_transfers = OrderedDict(
            (c.name, {'expr': c}) for c in self.table.c
        )
        self.reflected = reflected
        self._grab_table_elements()

    def _grab_table_elements(self):
        schema = self.table.schema
        self.columns = OrderedDict()
        for c in self.table.c:
            c_copy = c.copy(schema=schema)
            c_copy.unique = c_copy.index = False
            # ensure that the type object was copied,
            # as we may need to modify it in-place
            if isinstance(c.type, SchemaEventTarget):
                assert c_copy.type is not c.type
            self.columns[c.name] = c_copy
        self.named_constraints = {}
        self.unnamed_constraints = []
        self.indexes = {}
        self.new_indexes = {}
        for const in self.table.constraints:
            if _is_type_bound(const):
                continue
            elif self.reflected and isinstance(const, CheckConstraint):
                # TODO: we are skipping reflected CheckConstraint because
                # we have no way to determine _is_type_bound() for these.
                pass
            elif const.name:
                self.named_constraints[const.name] = const
            else:
                self.unnamed_constraints.append(const)

        for idx in self.table.indexes:
            self.indexes[idx.name] = idx

        for k in self.table.kwargs:
            self.table_kwargs.setdefault(k, self.table.kwargs[k])

    def _transfer_elements_to_new_table(self):
        assert self.new_table is None, "Can only create new table once"

        m = MetaData()
        schema = self.table.schema

        self.new_table = new_table = Table(
            '_alembic_batch_temp', m,
            *(list(self.columns.values()) + list(self.table_args)),
            schema=schema,
            **self.table_kwargs)

        for const in list(self.named_constraints.values()) + \
                self.unnamed_constraints:

            const_columns = set([
                c.key for c in _columns_for_constraint(const)])

            if not const_columns.issubset(self.column_transfers):
                continue

            if isinstance(const, ForeignKeyConstraint):
                if _fk_is_self_referential(const):
                    # for self-referential constraint, refer to the
                    # *original* table name, and not _alembic_batch_temp.
                    # This is consistent with how we're handling
                    # FK constraints from other tables; we assume SQLite,
                    # with foreign keys unenforced, just keeps the names
                    # unchanged, so when we rename back, they match again.
                    const_copy = const.copy(
                        schema=schema, target_table=self.table)
                else:
                    # "target_table" for ForeignKeyConstraint.copy() is
                    # only used if the FK is detected as being
                    # self-referential, which we are handling above.
                    const_copy = const.copy(schema=schema)
            else:
                const_copy = const.copy(schema=schema, target_table=new_table)
            if isinstance(const, ForeignKeyConstraint):
                self._setup_referent(m, const)
            new_table.append_constraint(const_copy)

    def _gather_indexes_from_both_tables(self):
        idx = []
        idx.extend(self.indexes.values())
        for index in self.new_indexes.values():
            idx.append(
                Index(
                    index.name,
                    unique=index.unique,
                    *[self.new_table.c[col] for col in index.columns.keys()],
                    **index.kwargs)
            )
        return idx

    def _setup_referent(self, metadata, constraint):
        spec = constraint.elements[0]._get_colspec()
        parts = spec.split(".")
        tname = parts[-2]
        if len(parts) == 3:
            referent_schema = parts[0]
        else:
            referent_schema = None

        if tname != '_alembic_batch_temp':
            key = sql_schema._get_table_key(tname, referent_schema)
            if key in metadata.tables:
                t = metadata.tables[key]
                for elem in constraint.elements:
                    colname = elem._get_colspec().split(".")[-1]
                    if not t.c.contains_column(colname):
                        t.append_column(
                            Column(colname, sqltypes.NULLTYPE)
                        )
            else:
                Table(
                    tname, metadata,
                    *[Column(n, sqltypes.NULLTYPE) for n in
                      [elem._get_colspec().split(".")[-1]
                       for elem in constraint.elements]],
                    schema=referent_schema)

    def _create(self, op_impl):
        self._transfer_elements_to_new_table()

        op_impl.prep_table_for_batch(self.table)
        op_impl.create_table(self.new_table)

        try:
            op_impl._exec(
                self.new_table.insert(inline=True).from_select(
                    list(k for k, transfer in
                         self.column_transfers.items() if 'expr' in transfer),
                    select([
                        transfer['expr']
                        for transfer in self.column_transfers.values()
                        if 'expr' in transfer
                    ])
                )
            )
            op_impl.drop_table(self.table)
        except:
            op_impl.drop_table(self.new_table)
            raise
        else:
            op_impl.rename_table(
                "_alembic_batch_temp",
                self.table.name,
                schema=self.table.schema
            )
            self.new_table.name = self.table.name
            try:
                for idx in self._gather_indexes_from_both_tables():
                    op_impl.create_index(idx)
            finally:
                self.new_table.name = "_alembic_batch_temp"

    def alter_column(self, table_name, column_name,
                     nullable=None,
                     server_default=False,
                     name=None,
                     type_=None,
                     autoincrement=None,
                     **kw
                     ):
        existing = self.columns[column_name]
        existing_transfer = self.column_transfers[column_name]
        if name is not None and name != column_name:
            # note that we don't change '.key' - we keep referring
            # to the renamed column by its old key in _create(). neat!
            existing.name = name
            existing_transfer["name"] = name

        if type_ is not None:
            type_ = sqltypes.to_instance(type_)
            # old type is being discarded so turn off eventing
            # rules. Alternatively we can
            # erase the events set up by this type, but this is simpler.
            # we also ignore the drop_constraint that will come here from
            # Operations.implementation_for(alter_column)
            if isinstance(existing.type, SchemaEventTarget):
                existing.type._create_events = \
                    existing.type.create_constraint = False

            if existing.type._type_affinity is not type_._type_affinity:
                existing_transfer["expr"] = cast(
                    existing_transfer["expr"], type_)

            existing.type = type_

            # we *don't* however set events for the new type, because
            # alter_column is invoked from
            # Operations.implementation_for(alter_column) which already
            # will emit an add_constraint()

        if nullable is not None:
            existing.nullable = nullable
        if server_default is not False:
            if server_default is None:
                existing.server_default = None
            else:
                sql_schema.DefaultClause(server_default)._set_parent(existing)
        if autoincrement is not None:
            existing.autoincrement = bool(autoincrement)

    def add_column(self, table_name, column, **kw):
        # we copy the column because operations.add_column()
        # gives us a Column that is part of a Table already.
        self.columns[column.name] = column.copy(schema=self.table.schema)
        self.column_transfers[column.name] = {}

    def drop_column(self, table_name, column, **kw):
        del self.columns[column.name]
        del self.column_transfers[column.name]

    def add_constraint(self, const):
        if not const.name:
            raise ValueError("Constraint must have a name")
        if isinstance(const, sql_schema.PrimaryKeyConstraint):
            if self.table.primary_key in self.unnamed_constraints:
                self.unnamed_constraints.remove(self.table.primary_key)

        self.named_constraints[const.name] = const

    def drop_constraint(self, const):
        if not const.name:
            raise ValueError("Constraint must have a name")
        try:
            const = self.named_constraints.pop(const.name)
        except KeyError:
            if _is_type_bound(const):
                # type-bound constraints are only included in the new
                # table via their type object in any case, so ignore the
                # drop_constraint() that comes here via the
                # Operations.implementation_for(alter_column)
                return
            raise ValueError("No such constraint: '%s'" % const.name)
        else:
            if isinstance(const, PrimaryKeyConstraint):
                for col in const.columns:
                    self.columns[col.name].primary_key = False

    def create_index(self, idx):
        self.new_indexes[idx.name] = idx

    def drop_index(self, idx):
        try:
            del self.indexes[idx.name]
        except KeyError:
            raise ValueError("No such index: '%s'" % idx.name)

    def rename_table(self, *arg, **kw):
        raise NotImplementedError("TODO")
File diff suppressed because it is too large
@ -1,159 +0,0 @@
from sqlalchemy import schema as sa_schema
from sqlalchemy.types import NULLTYPE, Integer
from ..util.compat import string_types
from .. import util


class SchemaObjects(object):

    def __init__(self, migration_context=None):
        self.migration_context = migration_context

    def primary_key_constraint(self, name, table_name, cols, schema=None):
        m = self.metadata()
        columns = [sa_schema.Column(n, NULLTYPE) for n in cols]
        t = sa_schema.Table(
            table_name, m,
            *columns,
            schema=schema)
        p = sa_schema.PrimaryKeyConstraint(
            *[t.c[n] for n in cols], name=name)
        t.append_constraint(p)
        return p

    def foreign_key_constraint(
            self, name, source, referent,
            local_cols, remote_cols,
            onupdate=None, ondelete=None,
            deferrable=None, source_schema=None,
            referent_schema=None, initially=None,
            match=None, **dialect_kw):
        m = self.metadata()
        if source == referent and source_schema == referent_schema:
            t1_cols = local_cols + remote_cols
        else:
            t1_cols = local_cols
            sa_schema.Table(
                referent, m,
                *[sa_schema.Column(n, NULLTYPE) for n in remote_cols],
                schema=referent_schema)

        t1 = sa_schema.Table(
            source, m,
            *[sa_schema.Column(n, NULLTYPE) for n in t1_cols],
            schema=source_schema)

        tname = "%s.%s" % (referent_schema, referent) if referent_schema \
            else referent

        if util.sqla_08:
            # "match" kw unsupported in 0.7
            dialect_kw['match'] = match

        f = sa_schema.ForeignKeyConstraint(local_cols,
                                           ["%s.%s" % (tname, n)
                                            for n in remote_cols],
                                           name=name,
                                           onupdate=onupdate,
                                           ondelete=ondelete,
                                           deferrable=deferrable,
                                           initially=initially,
                                           **dialect_kw
                                           )
        t1.append_constraint(f)

        return f

    def unique_constraint(self, name, source, local_cols, schema=None, **kw):
        t = sa_schema.Table(
            source, self.metadata(),
            *[sa_schema.Column(n, NULLTYPE) for n in local_cols],
            schema=schema)
        kw['name'] = name
        uq = sa_schema.UniqueConstraint(*[t.c[n] for n in local_cols], **kw)
        # TODO: need event tests to ensure the event
        # is fired off here
        t.append_constraint(uq)
        return uq

    def check_constraint(self, name, source, condition, schema=None, **kw):
        t = sa_schema.Table(source, self.metadata(),
                            sa_schema.Column('x', Integer), schema=schema)
        ck = sa_schema.CheckConstraint(condition, name=name, **kw)
        t.append_constraint(ck)
        return ck

    def generic_constraint(self, name, table_name, type_, schema=None, **kw):
        t = self.table(table_name, schema=schema)
        types = {
            'foreignkey': lambda name: sa_schema.ForeignKeyConstraint(
                [], [], name=name),
            'primary': sa_schema.PrimaryKeyConstraint,
            'unique': sa_schema.UniqueConstraint,
            'check': lambda name: sa_schema.CheckConstraint("", name=name),
            None: sa_schema.Constraint
        }
        try:
            const = types[type_]
        except KeyError:
            raise TypeError("'type' can be one of %s" %
                            ", ".join(sorted(repr(x) for x in types)))
        else:
            const = const(name=name)
            t.append_constraint(const)
            return const

    def metadata(self):
        kw = {}
        if self.migration_context is not None and \
                'target_metadata' in self.migration_context.opts:
            mt = self.migration_context.opts['target_metadata']
            if hasattr(mt, 'naming_convention'):
                kw['naming_convention'] = mt.naming_convention
        return sa_schema.MetaData(**kw)

    def table(self, name, *columns, **kw):
        m = self.metadata()
        t = sa_schema.Table(name, m, *columns, **kw)
        for f in t.foreign_keys:
            self._ensure_table_for_fk(m, f)
        return t

    def column(self, name, type_, **kw):
        return sa_schema.Column(name, type_, **kw)

    def index(self, name, tablename, columns, schema=None, **kw):
        t = sa_schema.Table(
            tablename or 'no_table', self.metadata(),
            schema=schema
        )
        idx = sa_schema.Index(
            name,
            *[util.sqla_compat._textual_index_column(t, n) for n in columns],
            **kw)
        return idx

    def _parse_table_key(self, table_key):
        if '.' in table_key:
            tokens = table_key.split('.')
            sname = ".".join(tokens[0:-1])
            tname = tokens[-1]
        else:
            tname = table_key
            sname = None
        return (sname, tname)

    def _ensure_table_for_fk(self, metadata, fk):
        """create a placeholder Table object for the referent of a
        ForeignKey.

        """
        if isinstance(fk._colspec, string_types):
            table_key, cname = fk._colspec.rsplit('.', 1)
            sname, tname = self._parse_table_key(table_key)
            if table_key not in metadata.tables:
                rel_t = sa_schema.Table(tname, metadata, schema=sname)
            else:
                rel_t = metadata.tables[table_key]
            if cname not in rel_t.c:
                rel_t.append_column(sa_schema.Column(cname, NULLTYPE))
@ -1,162 +0,0 @@
from . import ops

from . import Operations
from sqlalchemy import schema as sa_schema


@Operations.implementation_for(ops.AlterColumnOp)
def alter_column(operations, operation):

    compiler = operations.impl.dialect.statement_compiler(
        operations.impl.dialect,
        None
    )

    existing_type = operation.existing_type
    existing_nullable = operation.existing_nullable
    existing_server_default = operation.existing_server_default
    type_ = operation.modify_type
    column_name = operation.column_name
    table_name = operation.table_name
    schema = operation.schema
    server_default = operation.modify_server_default
    new_column_name = operation.modify_name
    nullable = operation.modify_nullable

    def _count_constraint(constraint):
        return not isinstance(
            constraint,
            sa_schema.PrimaryKeyConstraint) and \
            (not constraint._create_rule or
                constraint._create_rule(compiler))

    if existing_type and type_:
        t = operations.schema_obj.table(
            table_name,
            sa_schema.Column(column_name, existing_type),
            schema=schema
        )
        for constraint in t.constraints:
            if _count_constraint(constraint):
                operations.impl.drop_constraint(constraint)

    operations.impl.alter_column(
        table_name, column_name,
        nullable=nullable,
        server_default=server_default,
        name=new_column_name,
        type_=type_,
        schema=schema,
        existing_type=existing_type,
        existing_server_default=existing_server_default,
        existing_nullable=existing_nullable,
        **operation.kw
    )

    if type_:
        t = operations.schema_obj.table(
            table_name,
            operations.schema_obj.column(column_name, type_),
            schema=schema
        )
        for constraint in t.constraints:
            if _count_constraint(constraint):
                operations.impl.add_constraint(constraint)


@Operations.implementation_for(ops.DropTableOp)
def drop_table(operations, operation):
    operations.impl.drop_table(
        operation.to_table(operations.migration_context)
    )


@Operations.implementation_for(ops.DropColumnOp)
def drop_column(operations, operation):
    column = operation.to_column(operations.migration_context)
    operations.impl.drop_column(
        operation.table_name,
        column,
        schema=operation.schema,
        **operation.kw
    )


@Operations.implementation_for(ops.CreateIndexOp)
def create_index(operations, operation):
    idx = operation.to_index(operations.migration_context)
    operations.impl.create_index(idx)


@Operations.implementation_for(ops.DropIndexOp)
def drop_index(operations, operation):
    operations.impl.drop_index(
        operation.to_index(operations.migration_context)
    )


@Operations.implementation_for(ops.CreateTableOp)
def create_table(operations, operation):
    table = operation.to_table(operations.migration_context)
    operations.impl.create_table(table)
    return table


@Operations.implementation_for(ops.RenameTableOp)
def rename_table(operations, operation):
    operations.impl.rename_table(
        operation.table_name,
        operation.new_table_name,
        schema=operation.schema)


@Operations.implementation_for(ops.AddColumnOp)
def add_column(operations, operation):
    table_name = operation.table_name
    column = operation.column
    schema = operation.schema

    t = operations.schema_obj.table(table_name, column, schema=schema)
    operations.impl.add_column(
        table_name,
        column,
        schema=schema
    )
    for constraint in t.constraints:
        if not isinstance(constraint, sa_schema.PrimaryKeyConstraint):
            operations.impl.add_constraint(constraint)
    for index in t.indexes:
        operations.impl.create_index(index)


@Operations.implementation_for(ops.AddConstraintOp)
def create_constraint(operations, operation):
    operations.impl.add_constraint(
        operation.to_constraint(operations.migration_context)
    )


@Operations.implementation_for(ops.DropConstraintOp)
def drop_constraint(operations, operation):
    operations.impl.drop_constraint(
        operations.schema_obj.generic_constraint(
            operation.constraint_name,
            operation.table_name,
            operation.constraint_type,
            schema=operation.schema,
        )
    )


@Operations.implementation_for(ops.BulkInsertOp)
def bulk_insert(operations, operation):
    operations.impl.bulk_insert(
        operation.table, operation.rows, multiinsert=operation.multiinsert)


@Operations.implementation_for(ops.ExecuteSQLOp)
def execute_sql(operations, operation):
    operations.migration_context.impl.execute(
        operation.sqltext,
        execution_options=operation.execution_options
    )
@ -1,936 +0,0 @@
|
|||
from ..operations import Operations
|
||||
from .migration import MigrationContext
|
||||
from .. import util
|
||||
|
||||
|
||||
class EnvironmentContext(util.ModuleClsProxy):
|
||||
|
||||
"""A configurational facade made available in an ``env.py`` script.
|
||||
|
||||
The :class:`.EnvironmentContext` acts as a *facade* to the more
|
||||
nuts-and-bolts objects of :class:`.MigrationContext` as well as certain
|
||||
aspects of :class:`.Config`,
|
||||
within the context of the ``env.py`` script that is invoked by
|
||||
most Alembic commands.
|
||||
|
||||
:class:`.EnvironmentContext` is normally instantiated
|
||||
when a command in :mod:`alembic.command` is run. It then makes
|
||||
itself available in the ``alembic.context`` module for the scope
|
||||
of the command. From within an ``env.py`` script, the current
|
||||
:class:`.EnvironmentContext` is available by importing this module.
|
||||
|
||||
:class:`.EnvironmentContext` also supports programmatic usage.
|
||||
At this level, it acts as a Python context manager, that is, is
|
||||
intended to be used using the
|
||||
``with:`` statement. A typical use of :class:`.EnvironmentContext`::
|
||||
|
||||
from alembic.config import Config
|
||||
from alembic.script import ScriptDirectory
|
||||
|
||||
config = Config()
|
||||
config.set_main_option("script_location", "myapp:migrations")
|
||||
script = ScriptDirectory.from_config(config)
|
||||
|
||||
def my_function(rev, context):
|
||||
'''do something with revision "rev", which
|
||||
will be the current database revision,
|
||||
and "context", which is the MigrationContext
|
||||
that the env.py will create'''
|
||||
|
||||
with EnvironmentContext(
|
||||
config,
|
||||
script,
|
||||
fn = my_function,
|
||||
as_sql = False,
|
||||
starting_rev = 'base',
|
||||
destination_rev = 'head',
|
||||
tag = "sometag"
|
||||
):
|
||||
script.run_env()
|
||||
|
||||
The above script will invoke the ``env.py`` script
|
||||
within the migration environment. If and when ``env.py``
|
||||
calls :meth:`.MigrationContext.run_migrations`, the
|
||||
``my_function()`` function above will be called
|
||||
by the :class:`.MigrationContext`, given the context
|
||||
itself as well as the current revision in the database.
|
||||
|
||||
.. note::
|
||||
|
||||
For most API usages other than full blown
|
||||
invocation of migration scripts, the :class:`.MigrationContext`
|
||||
and :class:`.ScriptDirectory` objects can be created and
|
||||
used directly. The :class:`.EnvironmentContext` object
|
||||
is *only* needed when you need to actually invoke the
|
||||
``env.py`` module present in the migration environment.
|
||||
|
||||
"""

    _migration_context = None

    config = None
    """An instance of :class:`.Config` representing the
    configuration file contents as well as other variables
    set programmatically within it."""

    script = None
    """An instance of :class:`.ScriptDirectory` which provides
    programmatic access to version files within the ``versions/``
    directory.

    """

    def __init__(self, config, script, **kw):
        """Construct a new :class:`.EnvironmentContext`.

        :param config: a :class:`.Config` instance.
        :param script: a :class:`.ScriptDirectory` instance.
        :param \**kw: keyword options that will be ultimately
         passed along to the :class:`.MigrationContext` when
         :meth:`.EnvironmentContext.configure` is called.

        """
        self.config = config
        self.script = script
        self.context_opts = kw

    def __enter__(self):
        """Establish a context which provides a
        :class:`.EnvironmentContext` object to
        env.py scripts.

        The :class:`.EnvironmentContext` will
        be made available as ``from alembic import context``.

        """
        self._install_proxy()
        return self

    def __exit__(self, *arg, **kw):
        self._remove_proxy()

    def is_offline_mode(self):
        """Return True if the current migrations environment
        is running in "offline mode".

        This is ``True`` or ``False`` depending
        on the ``--sql`` flag passed.

        This function does not require that the :class:`.MigrationContext`
        has been configured.
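
        For example, the standard ``env.py`` template branches on this
        flag to select offline or online migrations (a minimal sketch;
        the two function names follow the default template's convention
        and are not required by the API)::

            from alembic import context

            if context.is_offline_mode():
                run_migrations_offline()
            else:
                run_migrations_online()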

        """
        return self.context_opts.get('as_sql', False)

    def is_transactional_ddl(self):
        """Return True if the context is configured to expect a
        transactional DDL capable backend.

        This defaults to the type of database in use, and
        can be overridden by the ``transactional_ddl`` argument
        to :meth:`.configure`.

        This function requires that a :class:`.MigrationContext`
        has first been made available via :meth:`.configure`.

        """
        return self.get_context().impl.transactional_ddl

    def requires_connection(self):
        return not self.is_offline_mode()

    def get_head_revision(self):
        """Return the hex identifier of the 'head' script revision.

        If the script directory has multiple heads, this
        method raises a :class:`.CommandError`;
        :meth:`.EnvironmentContext.get_head_revisions` should be preferred.

        This function does not require that the :class:`.MigrationContext`
        has been configured.

        .. seealso:: :meth:`.EnvironmentContext.get_head_revisions`

        """
        return self.script.as_revision_number("head")

    def get_head_revisions(self):
        """Return the hex identifier of the 'heads' script revision(s).

        This returns a tuple containing the version number of all
        heads in the script directory.

        This function does not require that the :class:`.MigrationContext`
        has been configured.

        .. versionadded:: 0.7.0

        """
        return self.script.as_revision_number("heads")

    def get_starting_revision_argument(self):
        """Return the 'starting revision' argument,
        if the revision was passed using ``start:end``.

        This is only meaningful in "offline" mode.
        Returns ``None`` if no value is available
        or was configured.

        This function does not require that the :class:`.MigrationContext`
        has been configured.

        """
        if self._migration_context is not None:
            return self.script.as_revision_number(
                self.get_context()._start_from_rev)
        elif 'starting_rev' in self.context_opts:
            return self.script.as_revision_number(
                self.context_opts['starting_rev'])
        else:
            # this should raise only in the case that a command
            # is being run where the "starting rev" is never applicable;
            # this is to catch scripts which rely upon this in
            # non-sql mode or similar
            raise util.CommandError(
                "No starting revision argument is available.")

    def get_revision_argument(self):
        """Get the 'destination' revision argument.

        This is typically the argument passed to the
        ``upgrade`` or ``downgrade`` command.

        If it was specified as ``head``, the actual
        version number is returned; if specified
        as ``base``, ``None`` is returned.

        This function does not require that the :class:`.MigrationContext`
        has been configured.

        """
        return self.script.as_revision_number(
            self.context_opts['destination_rev'])

    def get_tag_argument(self):
        """Return the value passed for the ``--tag`` argument, if any.

        The ``--tag`` argument is not used directly by Alembic,
        but is available for custom ``env.py`` configurations that
        wish to use it; particularly for offline generation scripts
        that wish to generate tagged filenames.
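
        For example, a custom ``env.py`` could use the tag to name an
        offline SQL output file (an illustrative sketch; the filename
        scheme and the ``url`` variable are hypothetical, while
        ``output_buffer`` is the standard :meth:`.configure` parameter)::

            tag = context.get_tag_argument() or "default"
            buffer = open("migration_%s.sql" % tag, "w")
            context.configure(url=url, output_buffer=buffer)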

        This function does not require that the :class:`.MigrationContext`
        has been configured.

        .. seealso::

            :meth:`.EnvironmentContext.get_x_argument` - a newer and more
            open ended system of extending ``env.py`` scripts via the command
            line.

        """
        return self.context_opts.get('tag', None)

    def get_x_argument(self, as_dictionary=False):
        """Return the value(s) passed for the ``-x`` argument, if any.

        The ``-x`` argument is an open ended flag that allows any user-defined
        value or values to be passed on the command line, then available
        here for consumption by a custom ``env.py`` script.

        The return value is a list, returned directly from the ``argparse``
        structure.  If ``as_dictionary=True`` is passed, the ``x`` arguments
        are parsed using ``key=value`` format into a dictionary that is
        then returned.

        For example, to support passing a database URL on the command line,
        the standard ``env.py`` script can be modified like this::

            cmd_line_url = context.get_x_argument(
                as_dictionary=True).get('dbname')
            if cmd_line_url:
                engine = create_engine(cmd_line_url)
            else:
                engine = engine_from_config(
                    config.get_section(config.config_ini_section),
                    prefix='sqlalchemy.',
                    poolclass=pool.NullPool)

        This then takes effect by running the ``alembic`` script as::

            alembic -x dbname=postgresql://user:pass@host/dbname upgrade head

        This function does not require that the :class:`.MigrationContext`
        has been configured.

        .. versionadded:: 0.6.0

        .. seealso::

            :meth:`.EnvironmentContext.get_tag_argument`

            :attr:`.Config.cmd_opts`

        """
        if self.config.cmd_opts is not None:
            value = self.config.cmd_opts.x or []
        else:
            value = []
        if as_dictionary:
            value = dict(
                arg.split('=', 1) for arg in value
            )
        return value

    def configure(self,
                  connection=None,
                  url=None,
                  dialect_name=None,
                  transactional_ddl=None,
                  transaction_per_migration=False,
                  output_buffer=None,
                  starting_rev=None,
                  tag=None,
                  template_args=None,
                  render_as_batch=False,
                  target_metadata=None,
                  include_symbol=None,
                  include_object=None,
                  include_schemas=False,
                  process_revision_directives=None,
                  compare_type=False,
                  compare_server_default=False,
                  render_item=None,
                  literal_binds=False,
                  upgrade_token="upgrades",
                  downgrade_token="downgrades",
                  alembic_module_prefix="op.",
                  sqlalchemy_module_prefix="sa.",
                  user_module_prefix=None,
                  on_version_apply=None,
                  **kw
                  ):
        """Configure a :class:`.MigrationContext` within this
        :class:`.EnvironmentContext` which will provide database
        connectivity and other configuration to a series of
        migration scripts.

        Many methods on :class:`.EnvironmentContext` require that
        this method has been called in order to function, as they
        ultimately need to have database access or at least access
        to the dialect in use.  Those which do are documented as such.

        The important thing needed by :meth:`.configure` is a
        means to determine what kind of database dialect is in use.
        An actual connection to that database is needed only if
        the :class:`.MigrationContext` is to be used in
        "online" mode.

        If the :meth:`.is_offline_mode` function returns ``True``,
        then no connection is needed here.  Otherwise, the
        ``connection`` parameter should be present as an
        instance of :class:`sqlalchemy.engine.Connection`.

        This function is typically called from the ``env.py``
        script within a migration environment.  It can be called
        multiple times for an invocation.  The most recent
        :class:`~sqlalchemy.engine.Connection`
        for which it was called is the one that will be operated upon
        by the next call to :meth:`.run_migrations`.

        General parameters:

        :param connection: a :class:`~sqlalchemy.engine.Connection`
         to use
         for SQL execution in "online" mode.  When present, is also
         used to determine the type of dialect in use.
        :param url: a string database url, or a
         :class:`sqlalchemy.engine.url.URL` object.
         The type of dialect to be used will be derived from this if
         ``connection`` is not passed.
        :param dialect_name: string name of a dialect, such as
         "postgresql", "mssql", etc.
         The type of dialect to be used will be derived from this if
         ``connection`` and ``url`` are not passed.
        :param transactional_ddl: Force the usage of "transactional"
         DDL on or off;
         this otherwise defaults to whether or not the dialect in
         use supports it.
        :param transaction_per_migration: if True, nest each migration script
         in a transaction rather than the full series of migrations to
         run.

         .. versionadded:: 0.6.5

        :param output_buffer: a file-like object that will be used
         for textual output
         when the ``--sql`` option is used to generate SQL scripts.
         Defaults to
         ``sys.stdout`` if not passed here and also not present on
         the :class:`.Config`
         object.  The value here overrides that of the :class:`.Config`
         object.
        :param output_encoding: when using ``--sql`` to generate SQL
         scripts, apply this encoding to the string output.
        :param literal_binds: when using ``--sql`` to generate SQL
         scripts, pass through the ``literal_binds`` flag to the compiler
         so that any literal values that would ordinarily be bound
         parameters are converted to plain strings.

         .. warning:: Dialects can typically only handle simple datatypes
            like strings and numbers for auto-literal generation.  Datatypes
            like dates, intervals, and others may still require manual
            formatting, typically using :meth:`.Operations.inline_literal`.

         .. note:: the ``literal_binds`` flag is ignored on SQLAlchemy
            versions prior to 0.8 where this feature is not supported.

         .. versionadded:: 0.7.6

         .. seealso::

            :meth:`.Operations.inline_literal`

        :param starting_rev: Override the "starting revision" argument
         when using ``--sql`` mode.
        :param tag: a string tag for usage by custom ``env.py`` scripts.
         Set via the ``--tag`` option, can be overridden here.
        :param template_args: dictionary of template arguments which
         will be added to the template argument environment when
         running the "revision" command.  Note that the script environment
         is only run within the "revision" command if the ``--autogenerate``
         option is used, or if the option ``revision_environment=true``
         is present in the ``alembic.ini`` file.

        :param version_table: The name of the Alembic version table.
         The default is ``'alembic_version'``.
        :param version_table_schema: Optional schema to place version
         table within.
        :param version_table_pk: boolean, whether the Alembic version table
         should use a primary key constraint for the "value" column; this
         only takes effect when the table is first created.
         Defaults to True; setting to False should not be necessary and is
         here for backwards compatibility reasons.

         .. versionadded:: 0.8.10 Added the
            :paramref:`.EnvironmentContext.configure.version_table_pk`
            flag and additionally established that the Alembic version table
            has a primary key constraint by default.

        :param on_version_apply: a callable or collection of callables to be
         run for each migration step.
         The callables will be run in the order they are given, once for
         each migration step, after the respective operation has been
         applied but before its transaction is finalized.
         Each callable accepts no positional arguments and the following
         keyword arguments:

         * ``ctx``: the :class:`.MigrationContext` running the migration,
         * ``step``: a :class:`.MigrationInfo` representing the
           step currently being applied,
         * ``heads``: a collection of version strings representing the
           current heads,
         * ``run_args``: the ``**kwargs`` passed to :meth:`.run_migrations`.
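
         E.g., a callable which logs each step as it is applied (a
         minimal sketch; the logger name is illustrative)::

            import logging

            def report(ctx, step, heads, run_args):
                logging.getLogger("alembic.env").info(
                    "applied %s; heads are now %s", step, heads)

            context.configure(
                # ...
                on_version_apply=(report,)
            )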

         .. versionadded:: 0.9.3


        Parameters specific to the autogenerate feature, when
        ``alembic revision`` is run with the ``--autogenerate`` feature:

        :param target_metadata: a :class:`sqlalchemy.schema.MetaData`
         object, or a sequence of :class:`~sqlalchemy.schema.MetaData`
         objects, that will be consulted during autogeneration.
         The tables present in each :class:`~sqlalchemy.schema.MetaData`
         will be compared against
         what is locally available on the target
         :class:`~sqlalchemy.engine.Connection`
         to produce candidate upgrade/downgrade operations.

         .. versionchanged:: 0.9.0 the
            :paramref:`.EnvironmentContext.configure.target_metadata`
            parameter may now be passed a sequence of
            :class:`~sqlalchemy.schema.MetaData` objects to support
            autogeneration of multiple :class:`~sqlalchemy.schema.MetaData`
            collections.

        :param compare_type: Indicates type comparison behavior during
         an autogenerate
         operation.  Defaults to ``False`` which disables type
         comparison.  Set to
         ``True`` to turn on default type comparison, which has varied
         accuracy depending on backend.  See :ref:`compare_types`
         for an example as well as information on other type
         comparison options.

         .. seealso::

            :ref:`compare_types`

            :paramref:`.EnvironmentContext.configure.compare_server_default`

        :param compare_server_default: Indicates server default comparison
         behavior during
         an autogenerate operation.  Defaults to ``False`` which disables
         server default
         comparison.  Set to ``True`` to turn on server default comparison,
         which has
         varied accuracy depending on backend.

         To customize server default comparison behavior, a callable may
         be specified
         which can filter server default comparisons during an
         autogenerate operation.  The format of this
         callable is::

            def my_compare_server_default(context, inspected_column,
                metadata_column, inspected_default, metadata_default,
                rendered_metadata_default):
                # return True if the defaults are different,
                # False if not, or None to allow the default implementation
                # to compare these defaults
                return None

            context.configure(
                # ...
                compare_server_default = my_compare_server_default
            )

         ``inspected_column`` is a dictionary structure as returned by
         :meth:`sqlalchemy.engine.reflection.Inspector.get_columns`, whereas
         ``metadata_column`` is a :class:`sqlalchemy.schema.Column` from
         the local model environment.

         A return value of ``None`` indicates to allow default server default
         comparison
         to proceed.  Note that some backends such as Postgresql actually
         execute
         the two defaults on the database side to compare for equivalence.

         .. seealso::

            :paramref:`.EnvironmentContext.configure.compare_type`

        :param include_object: A callable function which is given
         the chance to return ``True`` or ``False`` for any object,
         indicating if the given object should be considered in the
         autogenerate sweep.

         The function accepts the following positional arguments:

         * ``object``: a :class:`~sqlalchemy.schema.SchemaItem` object such
           as a :class:`~sqlalchemy.schema.Table`,
           :class:`~sqlalchemy.schema.Column`,
           :class:`~sqlalchemy.schema.Index`,
           :class:`~sqlalchemy.schema.UniqueConstraint`,
           or :class:`~sqlalchemy.schema.ForeignKeyConstraint` object
         * ``name``: the name of the object.  This is typically available
           via ``object.name``.
         * ``type``: a string describing the type of object; currently
           ``"table"``, ``"column"``, ``"index"``, ``"unique_constraint"``,
           or ``"foreign_key_constraint"``

           .. versionadded:: 0.7.0 Support for indexes and unique constraints
              within the
              :paramref:`~.EnvironmentContext.configure.include_object` hook.

           .. versionadded:: 0.7.1 Support for foreign keys within the
              :paramref:`~.EnvironmentContext.configure.include_object` hook.

         * ``reflected``: ``True`` if the given object was produced based on
           table reflection, ``False`` if it's from a local :class:`.MetaData`
           object.
         * ``compare_to``: the object being compared against, if available,
           else ``None``.

         E.g.::

            def include_object(object, name, type_, reflected, compare_to):
                if (type_ == "column" and
                    not reflected and
                    object.info.get("skip_autogenerate", False)):
                    return False
                else:
                    return True

            context.configure(
                # ...
                include_object = include_object
            )

         :paramref:`.EnvironmentContext.configure.include_object` can also
         be used to filter on specific schemas to include or omit, when
         the :paramref:`.EnvironmentContext.configure.include_schemas`
         flag is set to ``True``.  The :attr:`.Table.schema` attribute
         on each :class:`.Table` object reflected will indicate the name of the
         schema from which the :class:`.Table` originates.

         .. versionadded:: 0.6.0

         .. seealso::

            :paramref:`.EnvironmentContext.configure.include_schemas`

        :param include_symbol: A callable function which, given a table name
         and schema name (may be ``None``), returns ``True`` or ``False``,
         indicating if the given table should be considered in the
         autogenerate sweep.

         .. deprecated:: 0.6.0
            :paramref:`.EnvironmentContext.configure.include_symbol`
            is superseded by the more generic
            :paramref:`.EnvironmentContext.configure.include_object`
            parameter.

         E.g.::

            def include_symbol(tablename, schema):
                return tablename not in ("skip_table_one", "skip_table_two")

            context.configure(
                # ...
                include_symbol = include_symbol
            )

         .. seealso::

            :paramref:`.EnvironmentContext.configure.include_schemas`

            :paramref:`.EnvironmentContext.configure.include_object`

        :param render_as_batch: if True, commands which alter elements
         within a table will be placed under a ``with batch_alter_table():``
         directive, so that batch migrations will take place.

         .. versionadded:: 0.7.0

         .. seealso::

            :ref:`batch_migrations`

        :param include_schemas: If True, autogenerate will scan across
         all schemas located by the SQLAlchemy
         :meth:`~sqlalchemy.engine.reflection.Inspector.get_schema_names`
         method, and include all differences in tables found across all
         those schemas.  When using this option, you may want to also
         use the :paramref:`.EnvironmentContext.configure.include_object`
         option to specify a callable which
         can filter the tables/schemas that get included.
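
         E.g., combined with
         :paramref:`.EnvironmentContext.configure.include_object` to limit
         the sweep to particular schemas (an illustrative sketch; the
         schema names are hypothetical)::

            def include_object(object, name, type_, reflected, compare_to):
                if type_ == "table":
                    return object.schema in (None, "tenant_schema")
                return True

            context.configure(
                # ...
                include_schemas=True,
                include_object=include_object
            )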

         .. seealso::

            :paramref:`.EnvironmentContext.configure.include_object`

        :param render_item: Callable that can be used to override how
         any schema item, i.e. column, constraint, type,
         etc., is rendered for autogenerate.  The callable receives a
         string describing the type of object, the object, and
         the autogen context.  If it returns False, the
         default rendering method will be used.  If it returns None,
         the item will not be rendered in the context of a Table
         construct, that is, it can be used to skip columns or constraints
         within op.create_table()::

            def my_render_column(type_, col, autogen_context):
                if type_ == "column" and isinstance(col, MySpecialCol):
                    return repr(col)
                else:
                    return False

            context.configure(
                # ...
                render_item = my_render_column
            )

         Available values for the type string include: ``"column"``,
         ``"primary_key"``, ``"foreign_key"``, ``"unique"``, ``"check"``,
         ``"type"``, ``"server_default"``.

         .. seealso::

            :ref:`autogen_render_types`

        :param upgrade_token: When autogenerate completes, the text of the
         candidate upgrade operations will be present in this template
         variable when ``script.py.mako`` is rendered.  Defaults to
         ``upgrades``.
        :param downgrade_token: When autogenerate completes, the text of the
         candidate downgrade operations will be present in this
         template variable when ``script.py.mako`` is rendered.  Defaults to
         ``downgrades``.

        :param alembic_module_prefix: When autogenerate refers to Alembic
         :mod:`alembic.operations` constructs, this prefix will be used
         (i.e. ``op.create_table``).  Defaults to "``op.``".
         Can be ``None`` to indicate no prefix.

        :param sqlalchemy_module_prefix: When autogenerate refers to
         SQLAlchemy
         :class:`~sqlalchemy.schema.Column` or type classes, this prefix
         will be used
         (i.e. ``sa.Column("somename", sa.Integer)``).  Defaults to "``sa.``".
         Can be ``None`` to indicate no prefix.
         Note that when dialect-specific types are rendered, autogenerate
         will render them using the dialect module name, i.e. ``mssql.BIT()``,
         ``postgresql.UUID()``.

        :param user_module_prefix: When autogenerate refers to a SQLAlchemy
         type (e.g. :class:`.TypeEngine`) where the module name is not
         under the ``sqlalchemy`` namespace, this prefix will be used
         within autogenerate.  If left at its default of
         ``None``, the ``__module__`` attribute of the type is used to
         render the import module.  It's a good practice to set this
         and to have all custom types be available from a fixed module space,
         in order to future-proof migration files against reorganizations
         in modules.

         .. versionchanged:: 0.7.0
            :paramref:`.EnvironmentContext.configure.user_module_prefix`
            no longer defaults to the value of
            :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`
            when left at ``None``; the ``__module__`` attribute is now used.

         .. versionadded:: 0.6.3 added
            :paramref:`.EnvironmentContext.configure.user_module_prefix`

         .. seealso::

            :ref:`autogen_module_prefix`

        :param process_revision_directives: a callable function that will
         be passed a structure representing the end result of an autogenerate
         or plain "revision" operation, which can be manipulated to affect
         how the ``alembic revision`` command ultimately outputs new
         revision scripts.  The structure of the callable is::

            def process_revision_directives(context, revision, directives):
                pass

         The ``directives`` parameter is a Python list containing
         a single :class:`.MigrationScript` directive, which represents
         the revision file to be generated.  This list as well as its
         contents may be freely modified to produce any set of commands.
         The section :ref:`customizing_revision` shows an example of
         doing this.  The ``context`` parameter is the
         :class:`.MigrationContext` in use,
         and ``revision`` is a tuple of revision identifiers representing the
         current revision of the database.

         The callable is invoked at all times when the ``--autogenerate``
         option is passed to ``alembic revision``.  If ``--autogenerate``
         is not passed, the callable is invoked only if the
         ``revision_environment`` variable is set to True in the Alembic
         configuration, in which case the given ``directives`` collection
         will contain empty :class:`.UpgradeOps` and :class:`.DowngradeOps`
         collections for ``.upgrade_ops`` and ``.downgrade_ops``.  The
         ``--autogenerate`` option itself can be inferred by inspecting
         ``context.config.cmd_opts.autogenerate``.

         The callable function may optionally be an instance of
         a :class:`.Rewriter` object.  This is a helper object that
         assists in the production of autogenerate-stream rewriter functions.
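
         For example, a hook which prevents the ``revision`` command from
         generating a new file when autogenerate detects no changes (a
         sketch adapted from a common recipe; assumes ``--autogenerate``
         is in use)::

            def process_revision_directives(context, revision, directives):
                script = directives[0]
                if script.upgrade_ops.is_empty():
                    directives[:] = []

            context.configure(
                # ...
                process_revision_directives=process_revision_directives
            )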


         .. versionadded:: 0.8.0

         .. versionchanged:: 0.8.1 - The
            :paramref:`.EnvironmentContext.configure.process_revision_directives`
            hook can append op directives into :class:`.UpgradeOps` and
            :class:`.DowngradeOps` which will be rendered in Python regardless
            of whether the ``--autogenerate`` option is in use or not;
            the ``revision_environment`` configuration variable should be
            set to "true" in the config to enable this.


         .. seealso::

            :ref:`customizing_revision`

            :ref:`autogen_rewriter`

            :paramref:`.command.revision.process_revision_directives`

        Parameters specific to individual backends:

        :param mssql_batch_separator: The "batch separator" which will
         be placed between each statement when generating offline SQL Server
         migrations.  Defaults to ``GO``.  Note this is in addition to the
         customary semicolon ``;`` at the end of each statement; SQL Server
         considers the "batch separator" to denote the end of an
         individual statement execution, and cannot group certain
         dependent operations in one step.
        :param oracle_batch_separator: The "batch separator" which will
         be placed between each statement when generating offline
         Oracle migrations.  Defaults to ``/``.  Oracle doesn't add a
         semicolon between statements like most other backends.

        """
        opts = self.context_opts
        if transactional_ddl is not None:
            opts["transactional_ddl"] = transactional_ddl
        if output_buffer is not None:
            opts["output_buffer"] = output_buffer
        elif self.config.output_buffer is not None:
            opts["output_buffer"] = self.config.output_buffer
        if starting_rev:
            opts['starting_rev'] = starting_rev
        if tag:
            opts['tag'] = tag
        if template_args and 'template_args' in opts:
            opts['template_args'].update(template_args)
        opts["transaction_per_migration"] = transaction_per_migration
        opts['target_metadata'] = target_metadata
        opts['include_symbol'] = include_symbol
        opts['include_object'] = include_object
        opts['include_schemas'] = include_schemas
        opts['render_as_batch'] = render_as_batch
        opts['upgrade_token'] = upgrade_token
        opts['downgrade_token'] = downgrade_token
        opts['sqlalchemy_module_prefix'] = sqlalchemy_module_prefix
        opts['alembic_module_prefix'] = alembic_module_prefix
        opts['user_module_prefix'] = user_module_prefix
        opts['literal_binds'] = literal_binds
        opts['process_revision_directives'] = process_revision_directives
        opts['on_version_apply'] = util.to_tuple(on_version_apply, default=())

        if render_item is not None:
            opts['render_item'] = render_item
        if compare_type is not None:
            opts['compare_type'] = compare_type
        if compare_server_default is not None:
            opts['compare_server_default'] = compare_server_default
        opts['script'] = self.script

        opts.update(kw)

        self._migration_context = MigrationContext.configure(
            connection=connection,
            url=url,
            dialect_name=dialect_name,
            environment_context=self,
            opts=opts
        )

    def run_migrations(self, **kw):
        """Run migrations as determined by the current command line
        configuration
        as well as versioning information present (or not) in the current
        database connection (if one is present).

        The function accepts optional ``**kw`` arguments.  If these are
        passed, they are sent directly to the ``upgrade()`` and
        ``downgrade()``
        functions within each target revision file.  By modifying the
        ``script.py.mako`` file so that the ``upgrade()`` and ``downgrade()``
        functions accept arguments, parameters can be passed here so that
        contextual information, usually information to identify a particular
        database in use, can be passed from a custom ``env.py`` script
        to the migration functions.
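
        For example, with a ``script.py.mako`` whose migration functions
        are declared as ``def upgrade(engine_name):`` and
        ``def downgrade(engine_name):`` (the pattern used by the
        "multidb" template), ``env.py`` can pass the name through (a
        sketch; ``"engine1"`` is illustrative)::

            context.run_migrations(engine_name="engine1")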

        This function requires that a :class:`.MigrationContext` has
        first been made available via :meth:`.configure`.

        """
        with Operations.context(self._migration_context):
            self.get_context().run_migrations(**kw)

    def execute(self, sql, execution_options=None):
        """Execute the given SQL using the current change context.

        The behavior of :meth:`.execute` is the same
        as that of :meth:`.Operations.execute`.  Please see that
        function's documentation for full detail including
        caveats and limitations.

        This function requires that a :class:`.MigrationContext` has
        first been made available via :meth:`.configure`.

        """
        self.get_context().execute(sql,
                                   execution_options=execution_options)

    def static_output(self, text):
        """Emit text directly to the "offline" SQL stream.

        Typically this is for emitting comments that
        start with --.  The statement is not treated
        as a SQL execution, no ; or batch separator
        is added, etc.
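
        E.g.::

            context.static_output("-- this is a comment")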

        """
        self.get_context().impl.static_output(text)

    def begin_transaction(self):
        """Return a context manager that will
        enclose an operation within a "transaction",
        as defined by the environment's offline
        and transactional DDL settings.

        e.g.::

            with context.begin_transaction():
                context.run_migrations()

        :meth:`.begin_transaction` is intended to
        "do the right thing" regardless of
        calling context:

        * If :meth:`.is_transactional_ddl` is ``False``,
          returns a "do nothing" context manager
          which otherwise produces no transactional
          state or directives.
        * If :meth:`.is_offline_mode` is ``True``,
          returns a context manager that will
          invoke the :meth:`.DefaultImpl.emit_begin`
          and :meth:`.DefaultImpl.emit_commit`
          methods, which will produce the string
          directives ``BEGIN`` and ``COMMIT`` on
          the output stream, as rendered by the
          target backend (e.g. SQL Server would
          emit ``BEGIN TRANSACTION``).
        * Otherwise, calls :meth:`sqlalchemy.engine.Connection.begin`
          on the current online connection, which
          returns a :class:`sqlalchemy.engine.Transaction`
          object.  This object demarcates a real
          transaction and is itself a context manager,
          which will roll back if an exception
          is raised.

        Note that a custom ``env.py`` script which
        has more specific transactional needs can of course
        manipulate the :class:`~sqlalchemy.engine.Connection`
        directly to produce transactional state in "online"
        mode.

        """

        return self.get_context().begin_transaction()

    def get_context(self):
        """Return the current :class:`.MigrationContext` object.

        If :meth:`.EnvironmentContext.configure` has not been
        called yet, raises an exception.

        """

        if self._migration_context is None:
            raise Exception("No context has been configured yet.")
        return self._migration_context

    def get_bind(self):
        """Return the current 'bind'.

        In "online" mode, this is the
        :class:`sqlalchemy.engine.Connection` currently being used
        to emit SQL to the database.
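
        E.g., for ad-hoc SQL from within ``env.py`` (a minimal sketch;
        uses the string-execution style supported by the SQLAlchemy
        versions this code targets)::

            connection = context.get_bind()
            connection.execute("select 1")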

        This function requires that a :class:`.MigrationContext`
        has first been made available via :meth:`.configure`.

        """
        return self.get_context().bind

    def get_impl(self):
        return self.get_context().impl

@@ -1,989 +0,0 @@
import logging
import sys
from contextlib import contextmanager

from sqlalchemy import MetaData, Table, Column, String, literal_column,\
    PrimaryKeyConstraint
from sqlalchemy.engine.strategies import MockEngineStrategy
from sqlalchemy.engine import url as sqla_url
from sqlalchemy.engine import Connection

from ..util.compat import callable, EncodedIO
from .. import ddl, util

log = logging.getLogger(__name__)


class MigrationContext(object):

    """Represent the database state made available to a migration
    script.

    :class:`.MigrationContext` is the front end to an actual
    database connection, or alternatively a string output
    stream given a particular database dialect,
    from an Alembic perspective.

    When inside the ``env.py`` script, the :class:`.MigrationContext`
    is available via the
    :meth:`.EnvironmentContext.get_context` method,
    which is available at ``alembic.context``::

        # from within env.py script
        from alembic import context
        migration_context = context.get_context()

    For usage outside of an ``env.py`` script, such as for
    utility routines that want to check the current version
    in the database, the :meth:`.MigrationContext.configure`
    method is used to create new :class:`.MigrationContext` objects.
    For example, to get at the current revision in the
    database using :meth:`.MigrationContext.get_current_revision`::

        # in any application, outside of an env.py script
        from alembic.migration import MigrationContext
        from sqlalchemy import create_engine

        engine = create_engine("postgresql://mydatabase")
        conn = engine.connect()

        context = MigrationContext.configure(conn)
        current_rev = context.get_current_revision()

    The above context can also be used to produce
    Alembic migration operations with an :class:`.Operations`
    instance::

        # in any application, outside of the normal Alembic environment
        from alembic.operations import Operations
        op = Operations(context)
        op.alter_column("mytable", "somecolumn", nullable=True)

    """

    def __init__(self, dialect, connection, opts, environment_context=None):
        self.environment_context = environment_context
        self.opts = opts
        self.dialect = dialect
        self.script = opts.get('script')
        as_sql = opts.get('as_sql', False)
        transactional_ddl = opts.get("transactional_ddl")
        self._transaction_per_migration = opts.get(
            "transaction_per_migration", False)
        self.on_version_apply_callbacks = opts.get('on_version_apply', ())

        if as_sql:
            self.connection = self._stdout_connection(connection)
            assert self.connection is not None
        else:
            self.connection = connection
        self._migrations_fn = opts.get('fn')
        self.as_sql = as_sql

        if "output_encoding" in opts:
            self.output_buffer = EncodedIO(
                opts.get("output_buffer") or sys.stdout,
                opts['output_encoding']
            )
        else:
            self.output_buffer = opts.get("output_buffer", sys.stdout)

        self._user_compare_type = opts.get('compare_type', False)
        self._user_compare_server_default = opts.get(
            'compare_server_default',
            False)
        self.version_table = version_table = opts.get(
            'version_table', 'alembic_version')
        self.version_table_schema = version_table_schema = \
            opts.get('version_table_schema', None)
        self._version = Table(
            version_table, MetaData(),
            Column('version_num', String(32), nullable=False),
            schema=version_table_schema)
        if opts.get("version_table_pk", True):
            self._version.append_constraint(
                PrimaryKeyConstraint(
                    'version_num', name="%s_pkc" % version_table
                )
            )

        self._start_from_rev = opts.get("starting_rev")
        self.impl = ddl.DefaultImpl.get_by_dialect(dialect)(
            dialect, self.connection, self.as_sql,
            transactional_ddl,
            self.output_buffer,
            opts
        )
        log.info("Context impl %s.", self.impl.__class__.__name__)
        if self.as_sql:
            log.info("Generating static SQL")
        log.info("Will assume %s DDL.",
                 "transactional" if self.impl.transactional_ddl
                 else "non-transactional")

    @classmethod
    def configure(cls,
                  connection=None,
                  url=None,
                  dialect_name=None,
                  dialect=None,
                  environment_context=None,
                  opts=None,
                  ):
        """Create a new :class:`.MigrationContext`.

        This is a factory method usually called
        by :meth:`.EnvironmentContext.configure`.

        :param connection: a :class:`~sqlalchemy.engine.Connection`
         to use for SQL execution in "online" mode.  When present,
         is also used to determine the type of dialect in use.
        :param url: a string database url, or a
         :class:`sqlalchemy.engine.url.URL` object.
         The type of dialect to be used will be derived from this if
         ``connection`` is not passed.
        :param dialect_name: string name of a dialect, such as
         "postgresql", "mssql", etc.  The type of dialect to be used will be
         derived from this if ``connection`` and ``url`` are not passed.
        :param opts: dictionary of options.  Most other options
         accepted by :meth:`.EnvironmentContext.configure` are passed via
         this dictionary.
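
        E.g., to build a context against a dialect alone, without a live
        connection (a sketch; with ``as_sql=True`` statements are emitted
        to the output stream rather than executed)::

            context = MigrationContext.configure(
                dialect_name="postgresql",
                opts={'as_sql': True}
            )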

        """
        if opts is None:
            opts = {}

        if connection:
            if not isinstance(connection, Connection):
                util.warn(
                    "'connection' argument to configure() is expected "
                    "to be a sqlalchemy.engine.Connection instance, "
                    "got %r" % connection)
            dialect = connection.dialect
        elif url:
            url = sqla_url.make_url(url)
            dialect = url.get_dialect()()
        elif dialect_name:
            url = sqla_url.make_url("%s://" % dialect_name)
            dialect = url.get_dialect()()
        elif not dialect:
            raise Exception("Connection, url, or dialect_name is required.")

        return MigrationContext(dialect, connection, opts, environment_context)

    def begin_transaction(self, _per_migration=False):
        transaction_now = _per_migration == self._transaction_per_migration

        if not transaction_now:
            @contextmanager
            def do_nothing():
                yield
            return do_nothing()

        elif not self.impl.transactional_ddl:
            @contextmanager
            def do_nothing():
                yield
            return do_nothing()
        elif self.as_sql:
            @contextmanager
            def begin_commit():
                self.impl.emit_begin()
                yield
                self.impl.emit_commit()
            return begin_commit()
        else:
            return self.bind.begin()

    def get_current_revision(self):
        """Return the current revision, usually that which is present
        in the ``alembic_version`` table in the database.

        This method is intended to be used only for a migration stream that
        does not contain unmerged branches in the target database;
        if there are multiple branches present, an exception is raised.
        The :meth:`.MigrationContext.get_current_heads` method should be
        preferred over this method going forward in order to be compatible
        with branch migration support.

        If this :class:`.MigrationContext` was configured in "offline"
        mode, that is with ``as_sql=True``, the ``starting_rev``
        parameter is returned instead, if any.

        """
        heads = self.get_current_heads()
        if len(heads) == 0:
            return None
        elif len(heads) > 1:
            raise util.CommandError(
                "Version table '%s' has more than one head present; "
                "please use get_current_heads()" % self.version_table)
        else:
            return heads[0]

    def get_current_heads(self):
        """Return a tuple of the current 'head versions' that are represented
        in the target database.

        For a migration stream without branches, this will be a single
        value, synonymous with that of
        :meth:`.MigrationContext.get_current_revision`.  However when multiple
        unmerged branches exist within the target database, the returned tuple
        will contain a value for each head.

        If this :class:`.MigrationContext` was configured in "offline"
        mode, that is with ``as_sql=True``, the ``starting_rev``
        parameter is returned in a one-length tuple.

        If no version table is present, or if there are no revisions
        present, an empty tuple is returned.
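
        E.g., a branch-aware current-revision check (a minimal sketch)::

            heads = context.get_current_heads()
            if len(heads) > 1:
                raise RuntimeError("unmerged branch heads: %r" % (heads,))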

        .. versionadded:: 0.7.0

        """
        if self.as_sql:
            start_from_rev = self._start_from_rev
            if start_from_rev == 'base':
                start_from_rev = None
            elif start_from_rev is not None and self.script:
                start_from_rev = \
                    self.script.get_revision(start_from_rev).revision

            return util.to_tuple(start_from_rev, default=())
        else:
            if self._start_from_rev:
                raise util.CommandError(
                    "Can't specify current_rev to context "
                    "when using a database connection")
            if not self._has_version_table():
                return ()
        return tuple(
            row[0] for row in self.connection.execute(self._version.select())
        )

    def _ensure_version_table(self):
        self._version.create(self.connection, checkfirst=True)

    def _has_version_table(self):
        return self.connection.dialect.has_table(
            self.connection, self.version_table, self.version_table_schema)

    def stamp(self, script_directory, revision):
        """Stamp the version table with a specific revision.

        This method calculates those branches to which the given revision
        can apply, and updates those branches as though they were migrated
        towards that revision (either up or down).  If no current branches
        include the revision, it is added as a new branch head.
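
        A sketch of direct usage (normally the ``alembic stamp`` command
        drives this; ``config`` is assumed to be an existing
        :class:`.Config`)::

            from alembic.script import ScriptDirectory

            script = ScriptDirectory.from_config(config)
            context.stamp(script, "head")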

        .. versionadded:: 0.7.0

        """
        heads = self.get_current_heads()
        if not self.as_sql and not heads:
            self._ensure_version_table()
        head_maintainer = HeadMaintainer(self, heads)
        for step in script_directory._stamp_revs(revision, heads):
            head_maintainer.update_to_step(step)

    def run_migrations(self, **kw):
        """Run the migration scripts established for this
        :class:`.MigrationContext`, if any.

        The commands in :mod:`alembic.command` will set up a function
        that is ultimately passed to the :class:`.MigrationContext`
        as the ``fn`` argument.  This function represents the "work"
        that will be done when :meth:`.MigrationContext.run_migrations`
        is called, typically from within the ``env.py`` script of the
        migration environment.  The "work function" then provides an iterable
        of version callables and other version information; in the case of
        the ``upgrade`` or ``downgrade`` commands, these are the version
        scripts to invoke.  Other commands, such as ``current`` or
        ``stamp``, yield nothing, in the case that they want to run some
        other operation against the database.

        :param \**kw: keyword arguments here will be passed to each
         migration callable, that is the ``upgrade()`` or ``downgrade()``
         method within revision scripts.

        """
        self.impl.start_migrations()

        heads = self.get_current_heads()
        if not self.as_sql and not heads:
            self._ensure_version_table()

        head_maintainer = HeadMaintainer(self, heads)

        starting_in_transaction = not self.as_sql and \
            self._in_connection_transaction()

        for step in self._migrations_fn(heads, self):
            with self.begin_transaction(_per_migration=True):
                if self.as_sql and not head_maintainer.heads:
                    # for offline mode, include a CREATE TABLE from
                    # the base
                    self._version.create(self.connection)
                log.info("Running %s", step)
                if self.as_sql:
                    self.impl.static_output("-- Running %s" % (step.short_log,))
                step.migration_fn(**kw)

                # previously, we wouldn't stamp per migration
                # if we were in a transaction, however given the more
                # complex model that involves any number of inserts
                # and row-targeted updates and deletes, it's simpler for now
                # just to run the operations on every version
                head_maintainer.update_to_step(step)
                for callback in self.on_version_apply_callbacks:
                    callback(ctx=self,
                             step=step.info,
                             heads=set(head_maintainer.heads),
                             run_args=kw)

            if not starting_in_transaction and not self.as_sql and \
                    not self.impl.transactional_ddl and \
                    self._in_connection_transaction():
                raise util.CommandError(
                    "Migration \"%s\" has left an uncommitted "
                    "transaction opened; transactional_ddl is False so "
                    "Alembic is not committing transactions"
                    % step)

        if self.as_sql and not head_maintainer.heads:
            self._version.drop(self.connection)

    def _in_connection_transaction(self):
        try:
            meth = self.connection.in_transaction
        except AttributeError:
            return False
        else:
            return meth()

    def execute(self, sql, execution_options=None):
        """Execute a SQL construct or string statement.

        The underlying execution mechanics are used, that is
        if this is "offline mode" the SQL is written to the
        output buffer, otherwise the SQL is emitted on
        the current SQLAlchemy connection.

        """
        self.impl._exec(sql, execution_options)

    def _stdout_connection(self, connection):
        def dump(construct, *multiparams, **params):
            self.impl._exec(construct)

        return MockEngineStrategy.MockConnection(self.dialect, dump)

    @property
    def bind(self):
        """Return the current "bind".

        In online mode, this is an instance of
        :class:`sqlalchemy.engine.Connection`, and is suitable
        for ad-hoc execution of any kind of usage described
        in :ref:`sqlexpression_toplevel` as well as
        for usage with the :meth:`sqlalchemy.schema.Table.create`
        and :meth:`sqlalchemy.schema.MetaData.create_all` methods
        of :class:`~sqlalchemy.schema.Table`,
        :class:`~sqlalchemy.schema.MetaData`.

        Note that when "standard output" mode is enabled,
        this bind will be a "mock" connection handler that cannot
        return results and is only appropriate for a very limited
        subset of commands.

        """
        return self.connection

    @property
    def config(self):
        """Return the :class:`.Config` used by the current environment, if any.

        .. versionadded:: 0.6.6

        """
        if self.environment_context:
            return self.environment_context.config
        else:
            return None

    def _compare_type(self, inspector_column, metadata_column):
        if self._user_compare_type is False:
            return False

        if callable(self._user_compare_type):
            user_value = self._user_compare_type(
                self,
                inspector_column,
                metadata_column,
                inspector_column.type,
                metadata_column.type
            )
            if user_value is not None:
                return user_value

        return self.impl.compare_type(
            inspector_column,
            metadata_column)

    def _compare_server_default(self, inspector_column,
                                metadata_column,
                                rendered_metadata_default,
                                rendered_column_default):

        if self._user_compare_server_default is False:
            return False

        if callable(self._user_compare_server_default):
            user_value = self._user_compare_server_default(
                self,
                inspector_column,
                metadata_column,
                rendered_column_default,
                metadata_column.server_default,
                rendered_metadata_default
            )
            if user_value is not None:
                return user_value

        return self.impl.compare_server_default(
            inspector_column,
            metadata_column,
            rendered_metadata_default,
            rendered_column_default)


class HeadMaintainer(object):
    def __init__(self, context, heads):
        self.context = context
        self.heads = set(heads)

    def _insert_version(self, version):
        assert version not in self.heads
        self.heads.add(version)

        self.context.impl._exec(
            self.context._version.insert().
            values(
                version_num=literal_column("'%s'" % version)
            )
        )

    def _delete_version(self, version):
        self.heads.remove(version)

        ret = self.context.impl._exec(
            self.context._version.delete().where(
                self.context._version.c.version_num ==
                literal_column("'%s'" % version)))
        if not self.context.as_sql and ret.rowcount != 1:
            raise util.CommandError(
                "Online migration expected to match one "
                "row when deleting '%s' in '%s'; "
                "%d found"
                % (version,
                   self.context.version_table, ret.rowcount))

    def _update_version(self, from_, to_):
        assert to_ not in self.heads
        self.heads.remove(from_)
        self.heads.add(to_)

        ret = self.context.impl._exec(
            self.context._version.update().
            values(version_num=literal_column("'%s'" % to_)).where(
                self.context._version.c.version_num
                == literal_column("'%s'" % from_))
        )
        if not self.context.as_sql and ret.rowcount != 1:
            raise util.CommandError(
                "Online migration expected to match one "
                "row when updating '%s' to '%s' in '%s'; "
                "%d found"
                % (from_, to_, self.context.version_table, ret.rowcount))

    def update_to_step(self, step):
        if step.should_delete_branch(self.heads):
            vers = step.delete_version_num
            log.debug("branch delete %s", vers)
            self._delete_version(vers)
        elif step.should_create_branch(self.heads):
            vers = step.insert_version_num
            log.debug("new branch insert %s", vers)
            self._insert_version(vers)
        elif step.should_merge_branches(self.heads):
            # delete revs, update from rev, update to rev
            (delete_revs, update_from_rev,
             update_to_rev) = step.merge_branch_idents(self.heads)
            log.debug(
                "merge, delete %s, update %s to %s",
                delete_revs, update_from_rev, update_to_rev)
            for delrev in delete_revs:
                self._delete_version(delrev)
            self._update_version(update_from_rev, update_to_rev)
        elif step.should_unmerge_branches(self.heads):
            (update_from_rev, update_to_rev,
             insert_revs) = step.unmerge_branch_idents(self.heads)
            log.debug(
                "unmerge, insert %s, update %s to %s",
                insert_revs, update_from_rev, update_to_rev)
            for insrev in insert_revs:
                self._insert_version(insrev)
            self._update_version(update_from_rev, update_to_rev)
        else:
            from_, to_ = step.update_version_num(self.heads)
            log.debug("update %s to %s", from_, to_)
            self._update_version(from_, to_)


class MigrationInfo(object):
    """Exposes information about a migration step to a callback listener.

    The :class:`.MigrationInfo` object is available exclusively for the
    benefit of the :paramref:`.EnvironmentContext.on_version_apply`
    callback hook.
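
    For example, a callback might report the boundaries of each step
    (a minimal sketch; the print target is illustrative)::

        def on_version_apply(ctx, step, heads, run_args):
            print("step: %s -> %s" % (
                step.source_revision_ids, step.destination_revision_ids))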

    .. versionadded:: 0.9.3

    """

    is_upgrade = None
    """True/False: indicates whether this operation ascends or descends the
    version tree."""

    is_stamp = None
    """True/False: indicates whether this operation is a stamp, i.e. one
    which updates the version table only, without running any actual
    migration operations."""

    up_revision_id = None
    """Version string corresponding to :attr:`.Revision.revision`.

    In the case of a stamp operation, it is advised to use the
    :attr:`.MigrationInfo.up_revision_ids` tuple as a stamp operation can
    make a single movement from one or more branches down to a single
    branchpoint, in which case there will be multiple "up" revisions.

    .. seealso::

        :attr:`.MigrationInfo.up_revision_ids`

    """

    up_revision_ids = None
    """Tuple of version strings corresponding to :attr:`.Revision.revision`.

    In the majority of cases, this tuple will be a single value, synonymous
    with the scalar value of :attr:`.MigrationInfo.up_revision_id`.
    It can be multiple revision identifiers only in the case of an
    ``alembic stamp`` operation which is moving downwards from multiple
    branches down to their common branch point.

    .. versionadded:: 0.9.4

    """

    down_revision_ids = None
    """Tuple of strings representing the base revisions of this migration step.

    If empty, this represents a root revision; otherwise, the first item
    corresponds to :attr:`.Revision.down_revision`, and the rest are inferred
    from dependencies.
    """

    revision_map = None
    """The revision map inside of which this operation occurs."""

    def __init__(self, revision_map, is_upgrade, is_stamp, up_revisions,
                 down_revisions):
        self.revision_map = revision_map
        self.is_upgrade = is_upgrade
        self.is_stamp = is_stamp
        self.up_revision_ids = util.to_tuple(up_revisions, default=())
        if self.up_revision_ids:
            self.up_revision_id = self.up_revision_ids[0]
        else:
            # this should never be the case with
            # "upgrade", "downgrade", or "stamp" as we are always
            # measuring movement in terms of at least one upgrade version
            self.up_revision_id = None
        self.down_revision_ids = util.to_tuple(down_revisions, default=())

    @property
    def is_migration(self):
        """True/False: indicates whether this operation is a migration.

        At present this is true if and only if the migration is not a stamp.
        If other operation types are added in the future, both this attribute
        and :attr:`~.MigrationInfo.is_stamp` will be false.
        """
        return not self.is_stamp

    @property
    def source_revision_ids(self):
        """Active revisions before this migration step is applied."""
        return self.down_revision_ids if self.is_upgrade \
            else self.up_revision_ids

    @property
    def destination_revision_ids(self):
        """Active revisions after this migration step is applied."""
        return self.up_revision_ids if self.is_upgrade \
            else self.down_revision_ids

    @property
    def up_revision(self):
        """Get :attr:`~.MigrationInfo.up_revision_id` as a :class:`.Revision`."""
        return self.revision_map.get_revision(self.up_revision_id)

    @property
    def up_revisions(self):
        """Get :attr:`~.MigrationInfo.up_revision_ids` as a tuple of
        :class:`Revisions <.Revision>`.

        .. versionadded:: 0.9.4

        """
        return self.revision_map.get_revisions(self.up_revision_ids)

    @property
    def down_revisions(self):
        """Get :attr:`~.MigrationInfo.down_revision_ids` as a tuple of
        :class:`Revisions <.Revision>`."""
        return self.revision_map.get_revisions(self.down_revision_ids)

    @property
    def source_revisions(self):
        """Get :attr:`~MigrationInfo.source_revision_ids` as a tuple of
        :class:`Revisions <.Revision>`."""
        return self.revision_map.get_revisions(self.source_revision_ids)

    @property
    def destination_revisions(self):
        """Get :attr:`~MigrationInfo.destination_revision_ids` as a tuple of
        :class:`Revisions <.Revision>`."""
        return self.revision_map.get_revisions(self.destination_revision_ids)
|
||||
|
||||
|
||||
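# Editor's sketch (not part of the original module): MigrationInfo is
# typically consumed from an ``env.py`` callback; the ``on_version_apply``
# hook shown here is assumed from the public Alembic API of this era, and
# the callback name and log output are hypothetical.
#
#     def log_version_apply(ctx, step, heads, run_args):
#         # ``step`` is a MigrationInfo; source/destination flip
#         # automatically depending on upgrade vs. downgrade direction.
#         print("moving %s -> %s (stamp=%s)" % (
#             step.source_revision_ids,
#             step.destination_revision_ids,
#             step.is_stamp))
#
#     context.configure(connection=conn, target_metadata=metadata,
#                       on_version_apply=log_version_apply)
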
class MigrationStep(object):
    @property
    def name(self):
        return self.migration_fn.__name__

    @classmethod
    def upgrade_from_script(cls, revision_map, script):
        return RevisionStep(revision_map, script, True)

    @classmethod
    def downgrade_from_script(cls, revision_map, script):
        return RevisionStep(revision_map, script, False)

    @property
    def is_downgrade(self):
        return not self.is_upgrade

    @property
    def short_log(self):
        return "%s %s -> %s" % (
            self.name,
            util.format_as_comma(self.from_revisions_no_deps),
            util.format_as_comma(self.to_revisions_no_deps)
        )

    def __str__(self):
        if self.doc:
            return "%s %s -> %s, %s" % (
                self.name,
                util.format_as_comma(self.from_revisions_no_deps),
                util.format_as_comma(self.to_revisions_no_deps),
                self.doc
            )
        else:
            return self.short_log

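# Editor's note: ``short_log`` above renders a one-line summary such as
# "upgrade ae1027a6acf -> b5f2561b0df3", and ``__str__`` appends the revision
# docstring when present, e.g.
# "upgrade ae1027a6acf -> b5f2561b0df3, add account table".
# (The revision identifiers and message shown are illustrative only.)
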
class RevisionStep(MigrationStep):
    def __init__(self, revision_map, revision, is_upgrade):
        self.revision_map = revision_map
        self.revision = revision
        self.is_upgrade = is_upgrade
        if is_upgrade:
            self.migration_fn = revision.module.upgrade
        else:
            self.migration_fn = revision.module.downgrade

    def __repr__(self):
        return "RevisionStep(%r, is_upgrade=%r)" % (
            self.revision.revision, self.is_upgrade
        )

    def __eq__(self, other):
        return isinstance(other, RevisionStep) and \
            other.revision == self.revision and \
            self.is_upgrade == other.is_upgrade

    @property
    def doc(self):
        return self.revision.doc

    @property
    def from_revisions(self):
        if self.is_upgrade:
            return self.revision._all_down_revisions
        else:
            return (self.revision.revision, )

    @property
    def from_revisions_no_deps(self):
        if self.is_upgrade:
            return self.revision._versioned_down_revisions
        else:
            return (self.revision.revision, )

    @property
    def to_revisions(self):
        if self.is_upgrade:
            return (self.revision.revision, )
        else:
            return self.revision._all_down_revisions

    @property
    def to_revisions_no_deps(self):
        if self.is_upgrade:
            return (self.revision.revision, )
        else:
            return self.revision._versioned_down_revisions

    @property
    def _has_scalar_down_revision(self):
        return len(self.revision._all_down_revisions) == 1

    def should_delete_branch(self, heads):
        """A delete occurs when we are (a) in a downgrade and (b) moving
        to the "base", or to a version that is implied as a dependency of
        another version that remains.

        """
        if not self.is_downgrade:
            return False

        if self.revision.revision not in heads:
            return False

        downrevs = self.revision._all_down_revisions

        if not downrevs:
            # is a base
            return True
        else:
            # determine what the ultimate "to_revisions" for an
            # unmerge would be.  If there are none, then we're a delete.
            to_revisions = self._unmerge_to_revisions(heads)
            return not to_revisions

    def merge_branch_idents(self, heads):
        other_heads = set(heads).difference(self.from_revisions)

        if other_heads:
            ancestors = set(
                r.revision for r in
                self.revision_map._get_ancestor_nodes(
                    self.revision_map.get_revisions(other_heads),
                    check=False
                )
            )
            from_revisions = list(
                set(self.from_revisions).difference(ancestors))
        else:
            from_revisions = list(self.from_revisions)

        return (
            # delete revs, update from rev, update to rev
            list(from_revisions[0:-1]), from_revisions[-1],
            self.to_revisions[0]
        )

    def _unmerge_to_revisions(self, heads):
        other_heads = set(heads).difference([self.revision.revision])
        if other_heads:
            ancestors = set(
                r.revision for r in
                self.revision_map._get_ancestor_nodes(
                    self.revision_map.get_revisions(other_heads),
                    check=False
                )
            )
            return list(set(self.to_revisions).difference(ancestors))
        else:
            return self.to_revisions

    def unmerge_branch_idents(self, heads):
        to_revisions = self._unmerge_to_revisions(heads)

        return (
            # update from rev, update to rev, insert revs
            self.from_revisions[0], to_revisions[-1],
            to_revisions[0:-1]
        )

    def should_create_branch(self, heads):
        if not self.is_upgrade:
            return False

        downrevs = self.revision._all_down_revisions

        if not downrevs:
            # is a base
            return True
        else:
            # none of our downrevs are present, so...
            # we have to insert our version.  This is true whether
            # or not there is only one downrev, or multiple (in the latter
            # case, we're a merge point.)
            if not heads.intersection(downrevs):
                return True
            else:
                return False

    def should_merge_branches(self, heads):
        if not self.is_upgrade:
            return False

        downrevs = self.revision._all_down_revisions

        if len(downrevs) > 1 and \
                len(heads.intersection(downrevs)) > 1:
            return True

        return False

    def should_unmerge_branches(self, heads):
        if not self.is_downgrade:
            return False

        downrevs = self.revision._all_down_revisions

        if self.revision.revision in heads and len(downrevs) > 1:
            return True

        return False

    def update_version_num(self, heads):
        if not self._has_scalar_down_revision:
            downrev = heads.intersection(self.revision._all_down_revisions)
            assert len(downrev) == 1, \
                "Can't do an UPDATE because downrevision is ambiguous"
            down_revision = list(downrev)[0]
        else:
            down_revision = self.revision._all_down_revisions[0]

        if self.is_upgrade:
            return down_revision, self.revision.revision
        else:
            return self.revision.revision, down_revision

    @property
    def delete_version_num(self):
        return self.revision.revision

    @property
    def insert_version_num(self):
        return self.revision.revision

    @property
    def info(self):
        return MigrationInfo(revision_map=self.revision_map,
                             up_revisions=self.revision.revision,
                             down_revisions=self.revision._all_down_revisions,
                             is_upgrade=self.is_upgrade, is_stamp=False)

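# Editor's sketch (illustrative, not original source): the from/to properties
# above are symmetric with respect to direction.  Given a stub revision whose
# ``revision`` is 'b2' and whose ``_all_down_revisions`` is ('a1',):
#
#     step = RevisionStep(rev_map, stub, is_upgrade=True)
#     step.from_revisions   # -> ('a1',)
#     step.to_revisions     # -> ('b2',)
#
#     step = RevisionStep(rev_map, stub, is_upgrade=False)
#     step.from_revisions   # -> ('b2',)
#     step.to_revisions     # -> ('a1',)
#
# ``rev_map`` and ``stub`` are hypothetical stand-ins for a RevisionMap and a
# Script whose module supplies upgrade()/downgrade() callables.
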
class StampStep(MigrationStep):
    def __init__(self, from_, to_, is_upgrade, branch_move,
                 revision_map=None):
        self.from_ = util.to_tuple(from_, default=())
        self.to_ = util.to_tuple(to_, default=())
        self.is_upgrade = is_upgrade
        self.branch_move = branch_move
        self.migration_fn = self.stamp_revision
        self.revision_map = revision_map

    doc = None

    def stamp_revision(self, **kw):
        return None

    def __eq__(self, other):
        return isinstance(other, StampStep) and \
            other.from_revisions == self.from_revisions and \
            other.to_revisions == self.to_revisions and \
            other.branch_move == self.branch_move and \
            self.is_upgrade == other.is_upgrade

    @property
    def from_revisions(self):
        return self.from_

    @property
    def to_revisions(self):
        return self.to_

    @property
    def from_revisions_no_deps(self):
        return self.from_

    @property
    def to_revisions_no_deps(self):
        return self.to_

    @property
    def delete_version_num(self):
        assert len(self.from_) == 1
        return self.from_[0]

    @property
    def insert_version_num(self):
        assert len(self.to_) == 1
        return self.to_[0]

    def update_version_num(self, heads):
        assert len(self.from_) == 1
        assert len(self.to_) == 1
        return self.from_[0], self.to_[0]

    def merge_branch_idents(self, heads):
        return (
            # delete revs, update from rev, update to rev
            list(self.from_[0:-1]), self.from_[-1],
            self.to_[0]
        )

    def unmerge_branch_idents(self, heads):
        return (
            # update from rev, update to rev, insert revs
            self.from_[0], self.to_[-1],
            list(self.to_[0:-1])
        )

    def should_delete_branch(self, heads):
        return self.is_downgrade and self.branch_move

    def should_create_branch(self, heads):
        return self.is_upgrade and self.branch_move

    def should_merge_branches(self, heads):
        return len(self.from_) > 1

    def should_unmerge_branches(self, heads):
        return len(self.to_) > 1

    @property
    def info(self):
        up, down = (self.to_, self.from_) if self.is_upgrade \
            else (self.from_, self.to_)
        return MigrationInfo(revision_map=self.revision_map,
                             up_revisions=up,
                             down_revisions=down,
                             is_upgrade=self.is_upgrade,
                             is_stamp=True)

@@ -1,3 +0,0 @@
from .base import ScriptDirectory, Script  # noqa

__all__ = ['ScriptDirectory', 'Script']
@@ -1,786 +0,0 @@
import datetime
from dateutil import tz
import os
import re
import shutil
from .. import util
from ..util import compat
from . import revision
from ..runtime import migration

from contextlib import contextmanager

_sourceless_rev_file = re.compile(r'(?!\.\#|__init__)(.*\.py)(c|o)?$')
_only_source_rev_file = re.compile(r'(?!\.\#|__init__)(.*\.py)$')
_legacy_rev = re.compile(r'([a-f0-9]+)\.py$')
_mod_def_re = re.compile(r'(upgrade|downgrade)_([a-z0-9]+)')
_slug_re = re.compile(r'\w+')
_default_file_template = "%(rev)s_%(slug)s"
_split_on_space_comma = re.compile(r',|(?: +)')


class ScriptDirectory(object):

    """Provides operations upon an Alembic script directory.

    This object is useful to get information as to current revisions,
    most notably being able to get at the "head" revision, for schemes
    that want to test if the current revision in the database is the most
    recent::

        from alembic.script import ScriptDirectory
        from alembic.config import Config
        config = Config()
        config.set_main_option("script_location", "myapp:migrations")
        script = ScriptDirectory.from_config(config)

        head_revision = script.get_current_head()

    """

    def __init__(self, dir, file_template=_default_file_template,
                 truncate_slug_length=40,
                 version_locations=None,
                 sourceless=False, output_encoding="utf-8",
                 timezone=None):
        self.dir = dir
        self.file_template = file_template
        self.version_locations = version_locations
        self.truncate_slug_length = truncate_slug_length or 40
        self.sourceless = sourceless
        self.output_encoding = output_encoding
        self.revision_map = revision.RevisionMap(self._load_revisions)
        self.timezone = timezone

        if not os.access(dir, os.F_OK):
            raise util.CommandError("Path doesn't exist: %r.  Please use "
                                    "the 'init' command to create a new "
                                    "scripts folder." % dir)

    @property
    def versions(self):
        loc = self._version_locations
        if len(loc) > 1:
            raise util.CommandError("Multiple version_locations present")
        else:
            return loc[0]

    @util.memoized_property
    def _version_locations(self):
        if self.version_locations:
            return [
                os.path.abspath(util.coerce_resource_to_filename(location))
                for location in self.version_locations
            ]
        else:
            return (os.path.abspath(os.path.join(self.dir, 'versions')),)

    def _load_revisions(self):
        if self.version_locations:
            paths = [
                vers for vers in self._version_locations
                if os.path.exists(vers)]
        else:
            paths = [self.versions]

        dupes = set()
        for vers in paths:
            for file_ in os.listdir(vers):
                path = os.path.realpath(os.path.join(vers, file_))
                if path in dupes:
                    util.warn(
                        "File %s loaded twice! ignoring.  Please ensure "
                        "version_locations is unique." % path
                    )
                    continue
                dupes.add(path)
                script = Script._from_filename(self, vers, file_)
                if script is None:
                    continue
                yield script

    @classmethod
    def from_config(cls, config):
        """Produce a new :class:`.ScriptDirectory` given a :class:`.Config`
        instance.

        The :class:`.Config` need only have the ``script_location`` key
        present.

        """
        script_location = config.get_main_option('script_location')
        if script_location is None:
            raise util.CommandError("No 'script_location' key "
                                    "found in configuration.")
        truncate_slug_length = config.get_main_option("truncate_slug_length")
        if truncate_slug_length is not None:
            truncate_slug_length = int(truncate_slug_length)

        version_locations = config.get_main_option("version_locations")
        if version_locations:
            version_locations = _split_on_space_comma.split(version_locations)

        return ScriptDirectory(
            util.coerce_resource_to_filename(script_location),
            file_template=config.get_main_option(
                'file_template',
                _default_file_template),
            truncate_slug_length=truncate_slug_length,
            sourceless=config.get_main_option("sourceless") == "true",
            output_encoding=config.get_main_option("output_encoding", "utf-8"),
            version_locations=version_locations,
            timezone=config.get_main_option("timezone")
        )

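# Editor's sketch (assumes a standard ``alembic.ini`` alongside the project;
# the file name and option values are illustrative):
#
#     from alembic.config import Config
#     cfg = Config("alembic.ini")            # reads the [alembic] section
#     script = ScriptDirectory.from_config(cfg)
#
# Only ``script_location`` is required; ``version_locations``, when present,
# is split on commas and/or whitespace by ``_split_on_space_comma`` above.
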
    @contextmanager
    def _catch_revision_errors(
            self,
            ancestor=None, multiple_heads=None, start=None, end=None,
            resolution=None):
        try:
            yield
        except revision.RangeNotAncestorError as rna:
            if start is None:
                start = rna.lower
            if end is None:
                end = rna.upper
            if not ancestor:
                ancestor = (
                    "Requested range %(start)s:%(end)s does not refer to "
                    "ancestor/descendant revisions along the same branch"
                )
            ancestor = ancestor % {"start": start, "end": end}
            compat.raise_from_cause(util.CommandError(ancestor))
        except revision.MultipleHeads as mh:
            if not multiple_heads:
                multiple_heads = (
                    "Multiple head revisions are present for given "
                    "argument '%(head_arg)s'; please "
                    "specify a specific target revision, "
                    "'<branchname>@%(head_arg)s' to "
                    "narrow to a specific head, or 'heads' for all heads")
            multiple_heads = multiple_heads % {
                "head_arg": end or mh.argument,
                "heads": util.format_as_comma(mh.heads)
            }
            compat.raise_from_cause(util.CommandError(multiple_heads))
        except revision.ResolutionError as re_:
            # renamed from "re" so the stdlib re module is not shadowed
            if resolution is None:
                resolution = "Can't locate revision identified by '%s'" % (
                    re_.argument
                )
            compat.raise_from_cause(util.CommandError(resolution))
        except revision.RevisionError as err:
            compat.raise_from_cause(util.CommandError(err.args[0]))

    def walk_revisions(self, base="base", head="heads"):
        """Iterate through all revisions.

        :param base: the base revision, or "base" to start from the
         empty revision.

        :param head: the head revision; defaults to "heads" to indicate
         all head revisions.  May also be "head" to indicate a single
         head revision.

         .. versionchanged:: 0.7.0 the "head" identifier now refers to
            the head of a non-branched repository only; use "heads" to
            refer to the set of all head branches simultaneously.

        """
        with self._catch_revision_errors(start=base, end=head):
            for rev in self.revision_map.iterate_revisions(
                    head, base, inclusive=True, assert_relative_length=False):
                yield rev

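# Editor's sketch: walking the full tree newest-first.  ``script`` is assumed
# to be a ScriptDirectory instance as constructed above.
#
#     for sc in script.walk_revisions(base="base", head="heads"):
#         print(sc.cmd_format(verbose=False, include_branches=True))
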
    def get_revisions(self, id_):
        """Return the :class:`.Script` instance with the given rev identifier,
        symbolic name, or sequence of identifiers.

        .. versionadded:: 0.7.0

        """
        with self._catch_revision_errors():
            return self.revision_map.get_revisions(id_)

    def get_all_current(self, id_):
        with self._catch_revision_errors():
            top_revs = set(self.revision_map.get_revisions(id_))
            top_revs.update(
                self.revision_map._get_ancestor_nodes(
                    list(top_revs), include_dependencies=True)
            )
            top_revs = self.revision_map._filter_into_branch_heads(top_revs)
            return top_revs

    def get_revision(self, id_):
        """Return the :class:`.Script` instance with the given rev id.

        .. seealso::

            :meth:`.ScriptDirectory.get_revisions`

        """

        with self._catch_revision_errors():
            return self.revision_map.get_revision(id_)

    def as_revision_number(self, id_):
        """Convert a symbolic revision, i.e. 'head' or 'base', into
        an actual revision number."""

        with self._catch_revision_errors():
            rev, branch_name = self.revision_map._resolve_revision_number(id_)

        if not rev:
            # convert () to None
            return None
        else:
            return rev[0]

    def iterate_revisions(self, upper, lower):
        """Iterate through script revisions, starting at the given
        upper revision identifier and ending at the lower.

        The traversal uses strictly the `down_revision`
        marker inside each migration script, so
        it is a requirement that upper >= lower,
        else you'll get nothing back.

        The iterator yields :class:`.Script` objects.

        .. seealso::

            :meth:`.RevisionMap.iterate_revisions`

        """
        return self.revision_map.iterate_revisions(upper, lower)

    def get_current_head(self):
        """Return the current head revision.

        If the script directory has multiple heads
        due to branching, an error is raised;
        :meth:`.ScriptDirectory.get_heads` should be
        preferred.

        :return: a string revision number.

        .. seealso::

            :meth:`.ScriptDirectory.get_heads`

        """
        with self._catch_revision_errors(multiple_heads=(
                'The script directory has multiple heads (due to branching). '
                'Please use get_heads(), or merge the branches using '
                'alembic merge.'
        )):
            return self.revision_map.get_current_head()

    def get_heads(self):
        """Return all "versioned head" revisions as strings.

        This is normally a list of length one,
        unless branches are present.  The
        :meth:`.ScriptDirectory.get_current_head()` method
        can be used normally when a script directory
        has only one head.

        :return: a list of string revision numbers.
        """
        return list(self.revision_map.heads)

    def get_base(self):
        """Return the "base" revision as a string.

        This is the revision number of the script that
        has a ``down_revision`` of None.

        If the script directory has multiple bases, an error is raised;
        :meth:`.ScriptDirectory.get_bases` should be
        preferred.

        """
        bases = self.get_bases()
        if len(bases) > 1:
            raise util.CommandError(
                "The script directory has multiple bases. "
                "Please use get_bases().")
        elif bases:
            return bases[0]
        else:
            return None

    def get_bases(self):
        """return all "base" revisions as strings.

        This is the revision number of all scripts that
        have a ``down_revision`` of None.

        .. versionadded:: 0.7.0

        """
        return list(self.revision_map.bases)

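# Editor's sketch: distinguishing the single-head and multi-head cases with
# the accessors above.  ``script`` is an assumed ScriptDirectory instance.
#
#     heads = script.get_heads()
#     if len(heads) == 1:
#         current = script.get_current_head()   # same as heads[0]
#     else:
#         current = None   # get_current_head() would raise CommandError here
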
    def _upgrade_revs(self, destination, current_rev):
        with self._catch_revision_errors(
                ancestor="Destination %(end)s is not a valid upgrade "
                "target from current head(s)", end=destination):
            revs = self.revision_map.iterate_revisions(
                destination, current_rev, implicit_base=True)
            revs = list(revs)
            return [
                migration.MigrationStep.upgrade_from_script(
                    self.revision_map, script)
                for script in reversed(list(revs))
            ]

    def _downgrade_revs(self, destination, current_rev):
        with self._catch_revision_errors(
                ancestor="Destination %(end)s is not a valid downgrade "
                "target from current head(s)", end=destination):
            revs = self.revision_map.iterate_revisions(
                current_rev, destination, select_for_downgrade=True)
            return [
                migration.MigrationStep.downgrade_from_script(
                    self.revision_map, script)
                for script in revs
            ]

    def _stamp_revs(self, revision, heads):
        with self._catch_revision_errors(
                multiple_heads="Multiple heads are present; please specify a "
                "single target revision"):

            heads = self.get_revisions(heads)

            # filter for lineage will resolve things like
            # branchname@base, version@base, etc.
            filtered_heads = self.revision_map.filter_for_lineage(
                heads, revision, include_dependencies=True)

            steps = []

            dests = self.get_revisions(revision) or [None]
            for dest in dests:
                if dest is None:
                    # dest is 'base'.  Return a "delete branch" migration
                    # for all applicable heads.
                    steps.extend([
                        migration.StampStep(head.revision, None, False, True,
                                            self.revision_map)
                        for head in filtered_heads
                    ])
                    continue
                elif dest in filtered_heads:
                    # the dest is already in the version table, do nothing.
                    continue

                # figure out if the dest is a descendant or an
                # ancestor of the selected nodes
                descendants = set(
                    self.revision_map._get_descendant_nodes([dest]))
                ancestors = set(self.revision_map._get_ancestor_nodes([dest]))

                if descendants.intersection(filtered_heads):
                    # heads are above the target, so this is a downgrade.
                    # we can treat them as a "merge", single step.
                    assert not ancestors.intersection(filtered_heads)
                    todo_heads = [head.revision for head in filtered_heads]
                    step = migration.StampStep(
                        todo_heads, dest.revision, False, False,
                        self.revision_map)
                    steps.append(step)
                    continue
                elif ancestors.intersection(filtered_heads):
                    # heads are below the target, so this is an upgrade.
                    # we can treat them as a "merge", single step.
                    todo_heads = [head.revision for head in filtered_heads]
                    step = migration.StampStep(
                        todo_heads, dest.revision, True, False,
                        self.revision_map)
                    steps.append(step)
                    continue
                else:
                    # destination is in a branch not represented,
                    # treat it as a new branch
                    step = migration.StampStep((), dest.revision, True, True,
                                               self.revision_map)
                    steps.append(step)
                    continue
            return steps

    def run_env(self):
        """Run the script environment.

        This basically runs the ``env.py`` script present
        in the migration environment.  It is called exclusively
        by the command functions in :mod:`alembic.command`.

        """
        util.load_python_file(self.dir, 'env.py')

    @property
    def env_py_location(self):
        return os.path.abspath(os.path.join(self.dir, "env.py"))

    def _generate_template(self, src, dest, **kw):
        util.status("Generating %s" % os.path.abspath(dest),
                    util.template_to_file,
                    src,
                    dest,
                    self.output_encoding,
                    **kw
                    )

    def _copy_file(self, src, dest):
        util.status("Generating %s" % os.path.abspath(dest),
                    shutil.copy,
                    src, dest)

    def _ensure_directory(self, path):
        path = os.path.abspath(path)
        if not os.path.exists(path):
            util.status(
                "Creating directory %s" % path,
                os.makedirs, path)

    def _generate_create_date(self):
        if self.timezone is not None:
            tzinfo = tz.gettz(self.timezone.upper())
            if tzinfo is None:
                raise util.CommandError(
                    "Can't locate timezone: %s" % self.timezone)
            create_date = datetime.datetime.utcnow().replace(
                tzinfo=tz.tzutc()).astimezone(tzinfo)
        else:
            create_date = datetime.datetime.now()
        return create_date

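# Editor's note: with a ``timezone`` value configured (e.g. "utc"),
# _generate_create_date() produces a timezone-aware datetime converted from
# UTC; otherwise it returns the naive local time from datetime.now().  A
# minimal illustration (directory name and timezone are examples only):
#
#     sd = ScriptDirectory("migrations", timezone="utc")
#     sd._generate_create_date().tzinfo is not None   # -> True
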
    def generate_revision(
            self, revid, message, head=None,
            refresh=False, splice=False, branch_labels=None,
            version_path=None, depends_on=None, **kw):
        """Generate a new revision file.

        This runs the ``script.py.mako`` template, given
        template arguments, and creates a new file.

        :param revid: String revision id.  Typically this
         comes from ``alembic.util.rev_id()``.
        :param message: the revision message, the one passed
         by the -m argument to the ``revision`` command.
        :param head: the head revision to generate against.  Defaults
         to the current "head" if no branches are present, else raises
         an exception.

         .. versionadded:: 0.7.0

        :param splice: if True, allow the "head" version to not be an
         actual head; otherwise, the selected head must be a head
         (e.g. endpoint) revision.
        :param refresh: deprecated.

        """
        if head is None:
            head = "head"

        with self._catch_revision_errors(multiple_heads=(
                "Multiple heads are present; please specify the head "
                "revision on which the new revision should be based, "
                "or perform a merge."
        )):
            heads = self.revision_map.get_revisions(head)

        if len(set(heads)) != len(heads):
            raise util.CommandError("Duplicate head revisions specified")

        create_date = self._generate_create_date()

        if version_path is None:
            if len(self._version_locations) > 1:
                for head in heads:
                    if head is not None:
                        version_path = os.path.dirname(head.path)
                        break
                else:
                    raise util.CommandError(
                        "Multiple version locations present, "
                        "please specify --version-path")
            else:
                version_path = self.versions

        norm_path = os.path.normpath(os.path.abspath(version_path))
        for vers_path in self._version_locations:
            if os.path.normpath(vers_path) == norm_path:
                break
        else:
            raise util.CommandError(
                "Path %s is not represented in current "
                "version locations" % version_path)

        if self.version_locations:
            self._ensure_directory(version_path)

        path = self._rev_path(version_path, revid, message, create_date)

        if not splice:
            for head in heads:
                if head is not None and not head.is_head:
                    raise util.CommandError(
                        "Revision %s is not a head revision; please specify "
                        "--splice to create a new branch from this revision"
                        % head.revision)

        if depends_on:
            with self._catch_revision_errors():
                depends_on = [
                    dep
                    if dep in rev.branch_labels  # maintain branch labels
                    else rev.revision  # resolve partial revision identifiers
                    for rev, dep in [
                        (self.revision_map.get_revision(dep), dep)
                        for dep in util.to_list(depends_on)
                    ]
                ]

        self._generate_template(
            os.path.join(self.dir, "script.py.mako"),
            path,
            up_revision=str(revid),
            down_revision=revision.tuple_rev_as_scalar(
                tuple(h.revision if h is not None else None for h in heads)),
            branch_labels=util.to_tuple(branch_labels),
            depends_on=revision.tuple_rev_as_scalar(depends_on),
            create_date=create_date,
            comma=util.format_as_comma,
            message=message if message is not None else ("empty message"),
            **kw
        )
        script = Script._from_path(self, path)
        if branch_labels and not script.branch_labels:
            raise util.CommandError(
                "Version %s specified branch_labels %s, however the "
                "migration file %s does not have them; have you upgraded "
                "your script.py.mako to include the "
                "'branch_labels' section?" % (
                    script.revision, branch_labels, script.path
                ))

        self.revision_map.add_revision(script)
        return script

    def _rev_path(self, path, rev_id, message, create_date):
        slug = "_".join(_slug_re.findall(message or "")).lower()
        if len(slug) > self.truncate_slug_length:
            slug = slug[:self.truncate_slug_length].rsplit('_', 1)[0] + '_'
        filename = "%s.py" % (
            self.file_template % {
                'rev': rev_id,
                'slug': slug,
                'year': create_date.year,
                'month': create_date.month,
                'day': create_date.day,
                'hour': create_date.hour,
                'minute': create_date.minute,
                'second': create_date.second
            }
        )
        return os.path.join(path, filename)

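# Editor's note: the default template "%(rev)s_%(slug)s" plus the tokens
# made available in _rev_path() means a config value such as
#
#     file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(rev)s_%%(slug)s
#
# (percent signs doubled for configparser) would yield names like
# "2017_07_14_ae1027a6acf_add_account.py".  Values shown are illustrative.
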
class Script(revision.Revision):

    """Represent a single revision file in a ``versions/`` directory.

    The :class:`.Script` instance is returned by methods
    such as :meth:`.ScriptDirectory.iterate_revisions`.

    """

    def __init__(self, module, rev_id, path):
        self.module = module
        self.path = path
        super(Script, self).__init__(
            rev_id,
            module.down_revision,
            branch_labels=util.to_tuple(
                getattr(module, 'branch_labels', None), default=()),
            dependencies=util.to_tuple(
                getattr(module, 'depends_on', None), default=())
        )

    module = None
    """The Python module representing the actual script itself."""

    path = None
    """Filesystem path of the script."""

    @property
    def doc(self):
        """Return the first paragraph of the docstring given in
        the script."""

        return re.split("\n\n", self.longdoc)[0]

    @property
    def longdoc(self):
        """Return the docstring given in the script."""

        doc = self.module.__doc__
        if doc:
            if hasattr(self.module, "_alembic_source_encoding"):
                doc = doc.decode(self.module._alembic_source_encoding)
            return doc.strip()
        else:
            return ""

    @property
    def log_entry(self):
        entry = "Rev: %s%s%s%s\n" % (
            self.revision,
            " (head)" if self.is_head else "",
            " (branchpoint)" if self.is_branch_point else "",
            " (mergepoint)" if self.is_merge_point else "",
        )
        if self.is_merge_point:
            entry += "Merges: %s\n" % (self._format_down_revision(), )
        else:
            entry += "Parent: %s\n" % (self._format_down_revision(), )

        if self.dependencies:
            entry += "Also depends on: %s\n" % (
                util.format_as_comma(self.dependencies))

        if self.is_branch_point:
            entry += "Branches into: %s\n" % (
                util.format_as_comma(self.nextrev))

        if self.branch_labels:
            entry += "Branch names: %s\n" % (
                util.format_as_comma(self.branch_labels), )

        entry += "Path: %s\n" % (self.path,)

        entry += "\n%s\n" % (
            "\n".join(
                "    %s" % para
                for para in self.longdoc.splitlines()
            )
        )
        return entry

    def __str__(self):
        return "%s -> %s%s%s%s, %s" % (
            self._format_down_revision(),
            self.revision,
            " (head)" if self.is_head else "",
            " (branchpoint)" if self.is_branch_point else "",
            " (mergepoint)" if self.is_merge_point else "",
            self.doc)

    def _head_only(
            self, include_branches=False, include_doc=False,
            include_parents=False, tree_indicators=True,
            head_indicators=True):
        text = self.revision
        if include_parents:
            if self.dependencies:
                text = "%s (%s) -> %s" % (
                    self._format_down_revision(),
                    util.format_as_comma(self.dependencies),
                    text
                )
            else:
                text = "%s -> %s" % (
                    self._format_down_revision(), text)
        if include_branches and self.branch_labels:
            text += " (%s)" % util.format_as_comma(self.branch_labels)
        if head_indicators or tree_indicators:
            text += "%s%s" % (
                " (head)" if self._is_real_head else "",
                " (effective head)" if self.is_head and
                not self._is_real_head else ""
            )
        if tree_indicators:
            text += "%s%s" % (
                " (branchpoint)" if self.is_branch_point else "",
                " (mergepoint)" if self.is_merge_point else ""
            )
        if include_doc:
            text += ", %s" % self.doc
        return text

    def cmd_format(
            self,
            verbose,
            include_branches=False, include_doc=False,
            include_parents=False, tree_indicators=True):
        if verbose:
            return self.log_entry
        else:
            return self._head_only(
                include_branches, include_doc,
                include_parents, tree_indicators)

    def _format_down_revision(self):
        if not self.down_revision:
            return "<base>"
        else:
            return util.format_as_comma(self._versioned_down_revisions)

    @classmethod
    def _from_path(cls, scriptdir, path):
        dir_, filename = os.path.split(path)
        return cls._from_filename(scriptdir, dir_, filename)

    @classmethod
    def _from_filename(cls, scriptdir, dir_, filename):
        if scriptdir.sourceless:
            py_match = _sourceless_rev_file.match(filename)
        else:
            py_match = _only_source_rev_file.match(filename)

        if not py_match:
            return None

        py_filename = py_match.group(1)

        if scriptdir.sourceless:
            is_c = py_match.group(2) == 'c'
            is_o = py_match.group(2) == 'o'
        else:
            is_c = is_o = False

        if is_o or is_c:
            py_exists = os.path.exists(os.path.join(dir_, py_filename))
            pyc_exists = os.path.exists(os.path.join(dir_, py_filename + "c"))

            # prefer .py over .pyc because we'd like to get the
            # source encoding; prefer .pyc over .pyo because we'd like to
            # have the docstrings which a -OO file would not have
            if py_exists or is_o and pyc_exists:
                return None

        module = util.load_python_file(dir_, filename)

        if not hasattr(module, "revision"):
            # attempt to get the revision id from the script name,
            # this is for legacy only
            m = _legacy_rev.match(filename)
            if not m:
                raise util.CommandError(
                    "Could not determine revision id from filename %s. "
                    "Be sure the 'revision' variable is "
                    "declared inside the script (please see 'Upgrading "
                    "from Alembic 0.1 to 0.2' in the documentation)."
                    % filename)
            else:
                revision = m.group(1)
        else:
            revision = module.revision
        return Script(module, revision, os.path.join(dir_, filename))
@@ -1,926 +0,0 @@
import re
import collections

from .. import util
from sqlalchemy import util as sqlautil
from ..util import compat

_relative_destination = re.compile(r'(?:(.+?)@)?(\w+)?((?:\+|-)\d+)')

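# Editor's note: _relative_destination recognizes relative movement targets
# of the form "[branch@][symbol]{+|-}N", e.g. "+2", "head-1",
# "mybranch@head+3".  Group 1 is the optional branch label, group 2 the
# optional symbol, group 3 the signed step count.  (Example strings are
# illustrative.)
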
class RevisionError(Exception):
    pass


class RangeNotAncestorError(RevisionError):
    def __init__(self, lower, upper):
        self.lower = lower
        self.upper = upper
        super(RangeNotAncestorError, self).__init__(
            "Revision %s is not an ancestor of revision %s" %
            (lower or "base", upper or "base")
        )


class MultipleHeads(RevisionError):
    def __init__(self, heads, argument):
        self.heads = heads
        self.argument = argument
        super(MultipleHeads, self).__init__(
            "Multiple heads are present for given argument '%s'; "
            "%s" % (argument, ", ".join(heads))
        )


class ResolutionError(RevisionError):
    def __init__(self, message, argument):
        super(ResolutionError, self).__init__(message)
        self.argument = argument


class RevisionMap(object):
    """Maintains a map of :class:`.Revision` objects.

    :class:`.RevisionMap` is used by :class:`.ScriptDirectory` to maintain
    and traverse the collection of :class:`.Script` objects, which are
    themselves instances of :class:`.Revision`.

    """

    def __init__(self, generator):
        """Construct a new :class:`.RevisionMap`.

        :param generator: a zero-arg callable that will generate an iterable
         of :class:`.Revision` instances to be used.  These are typically
         :class:`.Script` subclasses within regular Alembic use.

        """
        self._generator = generator

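# Editor's sketch (illustrative only): a RevisionMap can be exercised
# without a script directory by handing it plain Revision objects.  The
# Revision constructor signature assumed here is
# Revision(revision, down_revision); the generator name is hypothetical.
#
#     def _three_revs():
#         yield Revision('a1', None)
#         yield Revision('b2', 'a1')
#         yield Revision('c3', 'b2')
#
#     rmap = RevisionMap(_three_revs)
#     rmap.heads   # -> ('c3',)
#     rmap.bases   # -> ('a1',)
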
    @util.memoized_property
    def heads(self):
        """All "head" revisions as strings.

        This is normally a tuple of length one,
        unless unmerged branches are present.

        :return: a tuple of string revision numbers.

        """
        self._revision_map
        return self.heads

    @util.memoized_property
    def bases(self):
        """All "base" revisions as strings.

        These are revisions that have a ``down_revision`` of None,
        or empty tuple.

        :return: a tuple of string revision numbers.

        """
        self._revision_map
        return self.bases

    @util.memoized_property
    def _real_heads(self):
        """All "real" head revisions as strings.

        :return: a tuple of string revision numbers.

        """
        self._revision_map
        return self._real_heads

    @util.memoized_property
    def _real_bases(self):
        """All "real" base revisions as strings.

        :return: a tuple of string revision numbers.

        """
        self._revision_map
        return self._real_bases

    @util.memoized_property
    def _revision_map(self):
        """memoized attribute, initializes the revision map from the
        initial collection.

        """
        map_ = {}

        heads = sqlautil.OrderedSet()
        _real_heads = sqlautil.OrderedSet()
        self.bases = ()
        self._real_bases = ()

        has_branch_labels = set()
        has_depends_on = set()
        for revision in self._generator():

            if revision.revision in map_:
                util.warn("Revision %s is present more than once" %
                          revision.revision)
            map_[revision.revision] = revision
            if revision.branch_labels:
                has_branch_labels.add(revision)
            if revision.dependencies:
                has_depends_on.add(revision)
            heads.add(revision.revision)
            _real_heads.add(revision.revision)
            if revision.is_base:
                self.bases += (revision.revision, )
            if revision._is_real_base:
                self._real_bases += (revision.revision, )

        # add the branch_labels to the map_.  We'll need these
        # to resolve the dependencies.
        for revision in has_branch_labels:
            self._map_branch_labels(revision, map_)

        for revision in has_depends_on:
            self._add_depends_on(revision, map_)

        for rev in map_.values():
            for downrev in rev._all_down_revisions:
                if downrev not in map_:
                    util.warn("Revision %s referenced from %s is not present"
                              % (downrev, rev))
                down_revision = map_[downrev]
                down_revision.add_nextrev(rev)
                if downrev in rev._versioned_down_revisions:
                    heads.discard(downrev)
                _real_heads.discard(downrev)

        map_[None] = map_[()] = None
        self.heads = tuple(heads)
        self._real_heads = tuple(_real_heads)

        for revision in has_branch_labels:
            self._add_branches(revision, map_, map_branch_labels=False)
        return map_

    def _map_branch_labels(self, revision, map_):
        if revision.branch_labels:
            for branch_label in revision._orig_branch_labels:
                if branch_label in map_:
                    raise RevisionError(
                        "Branch name '%s' in revision %s already "
                        "used by revision %s" %
                        (branch_label, revision.revision,
                         map_[branch_label].revision)
                    )
                map_[branch_label] = revision

    def _add_branches(self, revision, map_, map_branch_labels=True):
        if map_branch_labels:
            self._map_branch_labels(revision, map_)

        if revision.branch_labels:
            revision.branch_labels.update(revision.branch_labels)
            for node in self._get_descendant_nodes(
                    [revision], map_, include_dependencies=False):
                node.branch_labels.update(revision.branch_labels)

                parent = node
                while parent and \
                        not parent._is_real_branch_point and \
                        not parent.is_merge_point:

                    parent.branch_labels.update(revision.branch_labels)
                    if parent.down_revision:
                        parent = map_[parent.down_revision]
                    else:
                        break

    def _add_depends_on(self, revision, map_):
        if revision.dependencies:
            deps = [map_[dep] for dep in util.to_tuple(revision.dependencies)]
            revision._resolved_dependencies = tuple(
                [d.revision for d in deps])

    def add_revision(self, revision, _replace=False):
        """add a single revision to an existing map.

        This method is for single-revision use cases; it is not
        appropriate for fully populating an entire revision map.

        """
        map_ = self._revision_map
        if not _replace and revision.revision in map_:
            util.warn("Revision %s is present more than once" %
                      revision.revision)
        elif _replace and revision.revision not in map_:
            raise Exception("revision %s not in map" % revision.revision)

        map_[revision.revision] = revision
        self._add_branches(revision, map_)
        self._add_depends_on(revision, map_)

        if revision.is_base:
            self.bases += (revision.revision, )
        if revision._is_real_base:
            self._real_bases += (revision.revision, )
        for downrev in revision._all_down_revisions:
            if downrev not in map_:
                util.warn(
                    "Revision %s referenced from %s is not present"
                    % (downrev, revision)
                )
            map_[downrev].add_nextrev(revision)
        if revision._is_real_head:
            self._real_heads = tuple(
                head for head in self._real_heads
                if head not in
                set(revision._all_down_revisions).union([revision.revision])
            ) + (revision.revision,)
        if revision.is_head:
            self.heads = tuple(
                head for head in self.heads
                if head not in
                set(revision._versioned_down_revisions).union(
                    [revision.revision])
            ) + (revision.revision,)

    def get_current_head(self, branch_label=None):
        """Return the current head revision.

        If the script directory has multiple heads
        due to branching, an error is raised;
        :meth:`.ScriptDirectory.get_heads` should be
        preferred.

        :param branch_label: optional branch name which will limit the
         heads considered to those which include that branch_label.

        :return: a string revision number.

        .. seealso::

            :meth:`.ScriptDirectory.get_heads`

        """
        current_heads = self.heads
        if branch_label:
            current_heads = self.filter_for_lineage(
                current_heads, branch_label)
        if len(current_heads) > 1:
            raise MultipleHeads(
                current_heads,
                "%s@head" % branch_label if branch_label else "head")

        if current_heads:
            return current_heads[0]
        else:
            return None

    def _get_base_revisions(self, identifier):
        return self.filter_for_lineage(self.bases, identifier)

    def get_revisions(self, id_):
        """Return the :class:`.Revision` instances with the given rev id
        or identifiers.

        May be given a single identifier, a sequence of identifiers, or the
        special symbols "head" or "base".  The result is a tuple of one
        or more identifiers, or an empty tuple in the case of "base".

        In the cases where 'head' or 'heads' is requested and the
        revision map is empty, returns an empty tuple.

        Supports partial identifiers, where the given identifier
        is matched against all identifiers that start with the given
        characters; if there is exactly one match, that determines the
        full revision.

        """
        if isinstance(id_, (list, tuple, set, frozenset)):
            return sum([self.get_revisions(id_elem) for id_elem in id_], ())
        else:
            resolved_id, branch_label = self._resolve_revision_number(id_)
            return tuple(
                self._revision_for_ident(rev_id, branch_label)
                for rev_id in resolved_id)

    def get_revision(self, id_):
        """Return the :class:`.Revision` instance with the given rev id.

        If a symbolic name such as "head" or "base" is given, resolves
        the identifier into the current head or base revision.  If the
        symbolic name refers to multiples, :class:`.MultipleHeads` is raised.

        Supports partial identifiers, where the given identifier
        is matched against all identifiers that start with the given
        characters; if there is exactly one match, that determines the
        full revision.

        """

        resolved_id, branch_label = self._resolve_revision_number(id_)
        if len(resolved_id) > 1:
            raise MultipleHeads(resolved_id, id_)
        elif resolved_id:
            resolved_id = resolved_id[0]

        return self._revision_for_ident(resolved_id, branch_label)

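# Editor's note: partial identifiers resolve through the KeyError branch of
# _revision_for_ident() below; e.g. get_revision("ae1") returns the revision
# "ae1027a6acf" provided exactly one known identifier starts with "ae1",
# and raises ResolutionError when zero or several do.  (The identifiers
# shown are illustrative.)
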
    def _resolve_branch(self, branch_label):
        try:
            branch_rev = self._revision_map[branch_label]
        except KeyError:
            try:
                nonbranch_rev = self._revision_for_ident(branch_label)
            except ResolutionError:
                raise ResolutionError(
                    "No such branch: '%s'" % branch_label, branch_label)
            else:
                return nonbranch_rev
        else:
            return branch_rev

    def _revision_for_ident(self, resolved_id, check_branch=None):
        if check_branch:
            branch_rev = self._resolve_branch(check_branch)
        else:
            branch_rev = None

        try:
            revision = self._revision_map[resolved_id]
        except KeyError:
            # do a partial lookup
            revs = [x for x in self._revision_map
                    if x and x.startswith(resolved_id)]
            if branch_rev:
                revs = self.filter_for_lineage(revs, check_branch)
            if not revs:
                raise ResolutionError(
                    "No such revision or branch '%s'" % resolved_id,
                    resolved_id)
            elif len(revs) > 1:
                raise ResolutionError(
                    "Multiple revisions start "
                    "with '%s': %s..." % (
                        resolved_id,
                        ", ".join("'%s'" % r for r in revs[0:3])
                    ), resolved_id)
            else:
                revision = self._revision_map[revs[0]]

        if check_branch and revision is not None:
            if not self._shares_lineage(
                    revision.revision, branch_rev.revision):
                raise ResolutionError(
                    "Revision %s is not a member of branch '%s'" %
                    (revision.revision, check_branch), resolved_id)
        return revision

    def _filter_into_branch_heads(self, targets):
        targets = set(targets)

        for rev in list(targets):
            if targets.intersection(
                self._get_descendant_nodes(
                    [rev], include_dependencies=False)).\
                    difference([rev]):
                targets.discard(rev)
        return targets

    def filter_for_lineage(
            self, targets, check_against, include_dependencies=False):
        id_, branch_label = self._resolve_revision_number(check_against)

        shares = []
        if branch_label:
            shares.append(branch_label)
        if id_:
            shares.extend(id_)

        return [
            tg for tg in targets
            if self._shares_lineage(
                tg, shares, include_dependencies=include_dependencies)]

    def _shares_lineage(
            self, target, test_against_revs, include_dependencies=False):
        if not test_against_revs:
            return True
        if not isinstance(target, Revision):
            target = self._revision_for_ident(target)

        test_against_revs = [
            self._revision_for_ident(test_against_rev)
            if not isinstance(test_against_rev, Revision)
            else test_against_rev
            for test_against_rev
            in util.to_tuple(test_against_revs, default=())
        ]

        return bool(
            set(self._get_descendant_nodes(
                [target], include_dependencies=include_dependencies))
            .union(self._get_ancestor_nodes(
                [target], include_dependencies=include_dependencies))
            .intersection(test_against_revs)
        )

    def _resolve_revision_number(self, id_):
        if isinstance(id_, compat.string_types) and "@" in id_:
            branch_label, id_ = id_.split('@', 1)
        else:
            branch_label = None

        # ensure map is loaded
        self._revision_map
        if id_ == 'heads':
            if branch_label:
                return self.filter_for_lineage(
                    self.heads, branch_label), branch_label
            else:
                return self._real_heads, branch_label
        elif id_ == 'head':
            current_head = self.get_current_head(branch_label)
            if current_head:
                return (current_head, ), branch_label
            else:
                return (), branch_label
        elif id_ == 'base' or id_ is None:
            return (), branch_label
        else:
            return util.to_tuple(id_, default=None), branch_label

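# Editor's sketch of the symbolic resolutions performed above (all revision
# values and the branch name are illustrative):
#
#     _resolve_revision_number("heads")          # -> (<real heads>, None)
#     _resolve_revision_number("head")           # -> (('c3',), None)
#     _resolve_revision_number("base")           # -> ((), None)
#     _resolve_revision_number("mybranch@head")  # -> (('c3',), 'mybranch')
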
    def _relative_iterate(
            self, destination, source, is_upwards,
            implicit_base, inclusive, assert_relative_length):
        if isinstance(destination, compat.string_types):
            match = _relative_destination.match(destination)
            if not match:
                return None
        else:
            return None

        relative = int(match.group(3))
        symbol = match.group(2)
        branch_label = match.group(1)

        reldelta = 1 if inclusive and not symbol else 0

        if is_upwards:
            if branch_label:
                from_ = "%s@head" % branch_label
            elif symbol:
                if symbol.startswith("head"):
                    from_ = symbol
                else:
                    from_ = "%s@head" % symbol
            else:
                from_ = "head"
            to_ = source
        else:
            if branch_label:
                to_ = "%s@base" % branch_label
            elif symbol:
                to_ = "%s@base" % symbol
            else:
                to_ = "base"
            from_ = source

        revs = list(
            self._iterate_revisions(
                from_, to_,
                inclusive=inclusive, implicit_base=implicit_base))

        if symbol:
            if branch_label:
                symbol_rev = self.get_revision(
                    "%s@%s" % (branch_label, symbol))
            else:
                symbol_rev = self.get_revision(symbol)
            if symbol.startswith("head"):
                index = 0
            elif symbol == "base":
                index = len(revs) - 1
            else:
                range_ = compat.range(len(revs) - 1, 0, -1)
                for index in range_:
                    if symbol_rev.revision == revs[index].revision:
                        break
                else:
                    index = 0
        else:
            index = 0
        if is_upwards:
            revs = revs[index - relative - reldelta:]
            if not index and assert_relative_length and \
                    len(revs) < abs(relative - reldelta):
                raise RevisionError(
                    "Relative revision %s didn't "
                    "produce %d migrations" % (destination, abs(relative)))
        else:
            revs = revs[0:index - relative + reldelta]
            if not index and assert_relative_length and \
                    len(revs) != abs(relative) + reldelta:
                raise RevisionError(
                    "Relative revision %s didn't "
                    "produce %d migrations" % (destination, abs(relative)))

        return iter(revs)

    def iterate_revisions(
            self, upper, lower, implicit_base=False, inclusive=False,
            assert_relative_length=True, select_for_downgrade=False):
        """Iterate through script revisions, starting at the given
        upper revision identifier and ending at the lower.

        The traversal uses strictly the `down_revision`
        marker inside each migration script, so
        it is a requirement that upper >= lower,
        else you'll get nothing back.

        The iterator yields :class:`.Revision` objects.

        """

        relative_upper = self._relative_iterate(
            upper, lower, True, implicit_base,
            inclusive, assert_relative_length
        )
        if relative_upper:
            return relative_upper

        relative_lower = self._relative_iterate(
            lower, upper, False, implicit_base,
            inclusive, assert_relative_length
        )
        if relative_lower:
            return relative_lower

        return self._iterate_revisions(
            upper, lower, inclusive=inclusive, implicit_base=implicit_base,
            select_for_downgrade=select_for_downgrade)

    def _get_descendant_nodes(
            self, targets, map_=None, check=False,
            omit_immediate_dependencies=False, include_dependencies=True):

        if omit_immediate_dependencies:
            def fn(rev):
                if rev not in targets:
                    return rev._all_nextrev
                else:
                    return rev.nextrev
        elif include_dependencies:
            def fn(rev):
                return rev._all_nextrev
        else:
            def fn(rev):
                return rev.nextrev

        return self._iterate_related_revisions(
            fn, targets, map_=map_, check=check
        )

    def _get_ancestor_nodes(
            self, targets, map_=None, check=False, include_dependencies=True):

        if include_dependencies:
            def fn(rev):
                return rev._all_down_revisions
        else:
            def fn(rev):
                return rev._versioned_down_revisions

        return self._iterate_related_revisions(
            fn, targets, map_=map_, check=check
        )

    def _iterate_related_revisions(self, fn, targets, map_, check=False):
        if map_ is None:
            map_ = self._revision_map

        seen = set()
        todo = collections.deque()
        for target in targets:

            todo.append(target)
            if check:
                per_target = set()

            while todo:
                rev = todo.pop()
                if check:
                    per_target.add(rev)

                if rev in seen:
                    continue
                seen.add(rev)
                todo.extend(
                    map_[rev_id] for rev_id in fn(rev))
                yield rev
            if check:
                overlaps = per_target.intersection(targets).\
                    difference([target])
                if overlaps:
                    raise RevisionError(
                        "Requested revision %s overlaps with "
                        "other requested revisions %s" % (
                            target.revision,
                            ", ".join(r.revision for r in overlaps)
                        )
                    )

    def _iterate_revisions(
            self, upper, lower, inclusive=True, implicit_base=False,
            select_for_downgrade=False):
        """iterate revisions from upper to lower.

        The traversal is depth-first within branches, and breadth-first
        across branches as a whole.

        """

        requested_lowers = self.get_revisions(lower)

        # some complexity to accommodate an iteration where some
        # branches are starting from nothing, and others are starting
        # from a given point.  Additionally, if the bottom branch
        # is specified using a branch identifier, then we limit operations
        # to just that branch.

        limit_to_lower_branch = \
            isinstance(lower, compat.string_types) and lower.endswith('@base')

        uppers = util.dedupe_tuple(self.get_revisions(upper))

        if not uppers and not requested_lowers:
            raise StopIteration()

        upper_ancestors = set(self._get_ancestor_nodes(uppers, check=True))

        if limit_to_lower_branch:
            lowers = self.get_revisions(self._get_base_revisions(lower))
        elif implicit_base and requested_lowers:
            lower_ancestors = set(
                self._get_ancestor_nodes(requested_lowers)
            )
            lower_descendants = set(
                self._get_descendant_nodes(requested_lowers)
            )
            base_lowers = set()
            candidate_lowers = upper_ancestors.\
                difference(lower_ancestors).\
                difference(lower_descendants)
            for rev in candidate_lowers:
                for downrev in rev._all_down_revisions:
                    if self._revision_map[downrev] in candidate_lowers:
                        break
                else:
                    base_lowers.add(rev)
            lowers = base_lowers.union(requested_lowers)
        elif implicit_base:
            base_lowers = set(self.get_revisions(self._real_bases))
            lowers = base_lowers.union(requested_lowers)
        elif not requested_lowers:
            lowers = set(self.get_revisions(self._real_bases))
        else:
            lowers = requested_lowers

        # represents all nodes we will produce
        total_space = set(
            rev.revision for rev in upper_ancestors).intersection(
            rev.revision for rev
            in self._get_descendant_nodes(
                lowers, check=True,
                omit_immediate_dependencies=(
                    select_for_downgrade and requested_lowers
                )
            )
        )

        if not total_space:
            # no nodes.  determine if this is an invalid range
            # or not.
            start_from = set(requested_lowers)
            start_from.update(
                self._get_ancestor_nodes(
                    list(start_from), include_dependencies=True)
            )

            # determine all the current branch points represented
            # by requested_lowers
            start_from = self._filter_into_branch_heads(start_from)

            # if the requested start is one of those branch points,
            # then just return empty set
            if start_from.intersection(upper_ancestors):
                raise StopIteration()
            else:
                # otherwise, they requested nodes out of
                # order
                raise RangeNotAncestorError(lower, upper)

        # organize branch points to be consumed separately from
        # member nodes
        branch_todo = set(
            rev for rev in
            (self._revision_map[rev] for rev in total_space)
            if rev._is_real_branch_point and
            len(total_space.intersection(rev._all_nextrev)) > 1
        )

        # it's not possible for any "uppers" to be in branch_todo,
        # because the ._all_nextrev of those nodes is not in total_space
        # assert not branch_todo.intersection(uppers)

        todo = collections.deque(
            r for r in uppers
            if r.revision in total_space
        )

        # iterate for total_space being emptied out
        total_space_modified = True
        while total_space:

            if not total_space_modified:
|
||||
raise RevisionError(
|
||||
"Dependency resolution failed; iteration can't proceed")
|
||||
total_space_modified = False
|
||||
# when everything non-branch pending is consumed,
|
||||
# add to the todo any branch nodes that have no
|
||||
# descendants left in the queue
|
||||
if not todo:
|
||||
todo.extendleft(
|
||||
sorted(
|
||||
(
|
||||
rev for rev in branch_todo
|
||||
if not rev._all_nextrev.intersection(total_space)
|
||||
),
|
||||
# favor "revisioned" branch points before
|
||||
# dependent ones
|
||||
key=lambda rev: 0 if rev.is_branch_point else 1
|
||||
)
|
||||
)
|
||||
branch_todo.difference_update(todo)
|
||||
# iterate nodes that are in the immediate todo
|
||||
while todo:
|
||||
rev = todo.popleft()
|
||||
total_space.remove(rev.revision)
|
||||
total_space_modified = True
|
||||
|
||||
# do depth first for elements within branches,
|
||||
# don't consume any actual branch nodes
|
||||
todo.extendleft([
|
||||
self._revision_map[downrev]
|
||||
for downrev in reversed(rev._all_down_revisions)
|
||||
if self._revision_map[downrev] not in branch_todo
|
||||
and downrev in total_space])
|
||||
|
||||
if not inclusive and rev in requested_lowers:
|
||||
continue
|
||||
yield rev
|
||||
|
||||
assert not branch_todo
|
||||
|
||||
|
||||
class Revision(object):
|
||||
"""Base class for revisioned objects.
|
||||
|
||||
The :class:`.Revision` class is the base of the more public-facing
|
||||
:class:`.Script` object, which represents a migration script.
|
||||
The mechanics of revision management and traversal are encapsulated
|
||||
within :class:`.Revision`, while :class:`.Script` applies this logic
|
||||
to Python files in a version directory.
|
||||
|
||||
"""
|
||||
nextrev = frozenset()
|
||||
"""following revisions, based on down_revision only."""
|
||||
|
||||
_all_nextrev = frozenset()
|
||||
|
||||
revision = None
|
||||
"""The string revision number."""
|
||||
|
||||
down_revision = None
|
||||
"""The ``down_revision`` identifier(s) within the migration script.
|
||||
|
||||
Note that the total set of "down" revisions is
|
||||
down_revision + dependencies.
|
||||
|
||||
"""
|
||||
|
||||
dependencies = None
|
||||
"""Additional revisions which this revision is dependent on.
|
||||
|
||||
From a migration standpoint, these dependencies are added to the
|
||||
down_revision to form the full iteration. However, the separation
|
||||
of down_revision from "dependencies" is to assist in navigating
|
||||
a history that contains many branches, typically a multi-root scenario.
|
||||
|
||||
"""
|
||||
|
||||
branch_labels = None
|
||||
"""Optional string/tuple of symbolic names to apply to this
|
||||
revision's branch"""
|
||||
|
||||
def __init__(
|
||||
self, revision, down_revision,
|
||||
dependencies=None, branch_labels=None):
|
||||
self.revision = revision
|
||||
self.down_revision = tuple_rev_as_scalar(down_revision)
|
||||
self.dependencies = tuple_rev_as_scalar(dependencies)
|
||||
self._resolved_dependencies = ()
|
||||
self._orig_branch_labels = util.to_tuple(branch_labels, default=())
|
||||
self.branch_labels = set(self._orig_branch_labels)
|
||||
|
||||
def __repr__(self):
|
||||
args = [
|
||||
repr(self.revision),
|
||||
repr(self.down_revision)
|
||||
]
|
||||
if self.dependencies:
|
||||
args.append("dependencies=%r" % (self.dependencies,))
|
||||
if self.branch_labels:
|
||||
args.append("branch_labels=%r" % (self.branch_labels,))
|
||||
return "%s(%s)" % (
|
||||
self.__class__.__name__,
|
||||
", ".join(args)
|
||||
)
|
||||
|
||||
def add_nextrev(self, revision):
|
||||
self._all_nextrev = self._all_nextrev.union([revision.revision])
|
||||
if self.revision in revision._versioned_down_revisions:
|
||||
self.nextrev = self.nextrev.union([revision.revision])
|
||||
|
||||
@property
|
||||
def _all_down_revisions(self):
|
||||
return util.to_tuple(self.down_revision, default=()) + \
|
||||
self._resolved_dependencies
|
||||
|
||||
@property
|
||||
def _versioned_down_revisions(self):
|
||||
return util.to_tuple(self.down_revision, default=())
|
||||
|
||||
@property
|
||||
def is_head(self):
|
||||
"""Return True if this :class:`.Revision` is a 'head' revision.
|
||||
|
||||
This is determined based on whether any other :class:`.Script`
|
||||
within the :class:`.ScriptDirectory` refers to this
|
||||
:class:`.Script`. Multiple heads can be present.
|
||||
|
||||
"""
|
||||
return not bool(self.nextrev)
|
||||
|
||||
@property
|
||||
def _is_real_head(self):
|
||||
return not bool(self._all_nextrev)
|
||||
|
||||
@property
|
||||
def is_base(self):
|
||||
"""Return True if this :class:`.Revision` is a 'base' revision."""
|
||||
|
||||
return self.down_revision is None
|
||||
|
||||
@property
|
||||
def _is_real_base(self):
|
||||
"""Return True if this :class:`.Revision` is a "real" base revision,
|
||||
e.g. that it has no dependencies either."""
|
||||
|
||||
# we use self.dependencies here because this is called up
|
||||
# in initialization where _real_dependencies isn't set up
|
||||
# yet
|
||||
return self.down_revision is None and self.dependencies is None
|
||||
|
||||
@property
|
||||
def is_branch_point(self):
|
||||
"""Return True if this :class:`.Script` is a branch point.
|
||||
|
||||
A branchpoint is defined as a :class:`.Script` which is referred
|
||||
to by more than one succeeding :class:`.Script`, that is more
|
||||
than one :class:`.Script` has a `down_revision` identifier pointing
|
||||
here.
|
||||
|
||||
"""
|
||||
return len(self.nextrev) > 1
|
||||
|
||||
@property
|
||||
def _is_real_branch_point(self):
|
||||
"""Return True if this :class:`.Script` is a 'real' branch point,
|
||||
taking into account dependencies as well.
|
||||
|
||||
"""
|
||||
return len(self._all_nextrev) > 1
|
||||
|
||||
@property
|
||||
def is_merge_point(self):
|
||||
"""Return True if this :class:`.Script` is a merge point."""
|
||||
|
||||
return len(self._versioned_down_revisions) > 1
|
||||
|
||||
|
||||
def tuple_rev_as_scalar(rev):
|
||||
if not rev:
|
||||
return None
|
||||
elif len(rev) == 1:
|
||||
return rev[0]
|
||||
else:
|
||||
return rev
|
|
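For illustration, a minimal sketch of how the Revision objects above chain together via down_revision, as the revision map does during construction; the identifiers are invented:

# three linked revisions: a <- b <- c
a = Revision('a1b2c3', None)
b = Revision('d4e5f6', 'a1b2c3')
c = Revision('0a9b8c', 'd4e5f6')

# the revision map calls add_nextrev() on each parent as it loads scripts
a.add_nextrev(b)
b.add_nextrev(c)

assert a.is_base and not a.is_head
assert c.is_head and not c.is_branch_point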
@ -1 +0,0 @@
Generic single-database configuration.
@ -1,74 +0,0 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts
script_location = ${script_location}

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; this defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat ${script_location}/versions

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

sqlalchemy.url = driver://user:pass@localhost/dbname


# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
@ -1,70 +0,0 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    url = config.get_main_option("sqlalchemy.url")
    context.configure(
        url=url, target_metadata=target_metadata, literal_binds=True)

    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
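The is_offline_mode() switch at the bottom of this template corresponds to the --sql flag on the command line. A minimal sketch of driving both paths programmatically, assuming an alembic.ini in the working directory:

# invoking the alembic CLI entry point in-process; argv mirrors the shell
from alembic.config import main

main(argv=["upgrade", "head"])           # online: connects and runs migrations
main(argv=["upgrade", "head", "--sql"])  # offline: emits SQL to stdout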
@ -1,24 +0,0 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}


def upgrade():
    ${upgrades if upgrades else "pass"}


def downgrade():
    ${downgrades if downgrades else "pass"}
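Rendered against the template above, a generated revision file looks roughly like this; the identifiers and table are invented for illustration:

"""add account table

Revision ID: 1975ea83b712
Revises:
Create Date: 2017-06-30 12:00:00.000000

"""
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic.
revision = '1975ea83b712'
down_revision = None
branch_labels = None
depends_on = None


def upgrade():
    op.create_table(
        'account',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(50), nullable=False),
    )


def downgrade():
    op.drop_table('account')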
@ -1 +0,0 @@
Rudimentary multi-database configuration.
@ -1,80 +0,0 @@
# a multi-database configuration.

[alembic]
# path to migration scripts
script_location = ${script_location}

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; this defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat ${script_location}/versions

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

databases = engine1, engine2

[engine1]
sqlalchemy.url = driver://user:pass@localhost/dbname

[engine2]
sqlalchemy.url = driver://user:pass@localhost/dbname2


# Logging configuration
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARN
handlers = console
qualname =

[logger_sqlalchemy]
level = WARN
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
@ -1,133 +0,0 @@
from __future__ import with_statement
from alembic import context
from sqlalchemy import engine_from_config, pool
from logging.config import fileConfig
import logging
import re

USE_TWOPHASE = False

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# Interpret the config file for Python logging.
# This line sets up loggers basically.
fileConfig(config.config_file_name)
logger = logging.getLogger('alembic.env')

# gather section names referring to different
# databases.  These are named "engine1", "engine2"
# in the sample .ini file.
db_names = config.get_main_option('databases')

# add your model's MetaData objects here
# for 'autogenerate' support.  These must be set
# up to hold just those tables targeting a
# particular database. table.tometadata() may be
# helpful here in case a "copy" of
# a MetaData is needed.
# from myapp import mymodel
# target_metadata = {
#       'engine1':mymodel.metadata1,
#       'engine2':mymodel.metadata2
# }
target_metadata = {}

# other values from the config, defined by the needs of env.py,
# can be acquired:
# my_important_option = config.get_main_option("my_important_option")
# ... etc.


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    # for the --sql use case, run migrations for each URL into
    # individual files.

    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        rec['url'] = context.config.get_section_option(name,
                                                       "sqlalchemy.url")

    for name, rec in engines.items():
        logger.info("Migrating database %s" % name)
        file_ = "%s.sql" % name
        logger.info("Writing output to %s" % file_)
        with open(file_, 'w') as buffer:
            context.configure(url=rec['url'], output_buffer=buffer,
                              target_metadata=target_metadata.get(name),
                              literal_binds=True)
            with context.begin_transaction():
                context.run_migrations(engine_name=name)


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """

    # for the direct-to-DB use case, start a transaction on all
    # engines, then run all migrations, then commit all transactions.

    engines = {}
    for name in re.split(r',\s*', db_names):
        engines[name] = rec = {}
        rec['engine'] = engine_from_config(
            context.config.get_section(name),
            prefix='sqlalchemy.',
            poolclass=pool.NullPool)

    for name, rec in engines.items():
        engine = rec['engine']
        rec['connection'] = conn = engine.connect()

        if USE_TWOPHASE:
            rec['transaction'] = conn.begin_twophase()
        else:
            rec['transaction'] = conn.begin()

    try:
        for name, rec in engines.items():
            logger.info("Migrating database %s" % name)
            context.configure(
                connection=rec['connection'],
                upgrade_token="%s_upgrades" % name,
                downgrade_token="%s_downgrades" % name,
                target_metadata=target_metadata.get(name)
            )
            context.run_migrations(engine_name=name)

        if USE_TWOPHASE:
            for rec in engines.values():
                rec['transaction'].prepare()

        for rec in engines.values():
            rec['transaction'].commit()
    except:
        for rec in engines.values():
            rec['transaction'].rollback()
        raise
    finally:
        for rec in engines.values():
            rec['connection'].close()


if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
@ -1,45 +0,0 @@
<%!
import re

%>"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}


def upgrade(engine_name):
    globals()["upgrade_%s" % engine_name]()


def downgrade(engine_name):
    globals()["downgrade_%s" % engine_name]()

<%
    db_names = config.get_main_option("databases")
%>

## generate an "upgrade_<xyz>() / downgrade_<xyz>()" function
## for each database name in the ini file.

% for db_name in re.split(r',\s*', db_names):

def upgrade_${db_name}():
    ${context.get("%s_upgrades" % db_name, "pass")}


def downgrade_${db_name}():
    ${context.get("%s_downgrades" % db_name, "pass")}

% endfor
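With databases = engine1, engine2 as in the sample .ini above, the % for loop at the bottom of this template renders one function pair per engine, roughly:

def upgrade_engine1():
    pass


def downgrade_engine1():
    pass


def upgrade_engine2():
    pass


def downgrade_engine2():
    pass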
@ -1 +0,0 @@
Configuration that reads from a Pylons project environment.
@ -1,40 +0,0 @@
# a Pylons configuration.

[alembic]
# path to migration scripts
script_location = ${script_location}

# template used to generate migration files
# file_template = %%(rev)s_%%(slug)s

# timezone to use when rendering the date
# within the migration file as well as the filename.
# string value is passed to dateutil.tz.gettz()
# leave blank for localtime
# timezone =

# max length of characters to apply to the
# "slug" field
#truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; this defaults
# to ${script_location}/versions. When using multiple version
# directories, initial revisions must be specified with --version-path
# version_locations = %(here)s/bar %(here)s/bat ${script_location}/versions

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

pylons_config_file = ./development.ini

# that's it !
@ -1,78 +0,0 @@
"""Pylons bootstrap environment.

Place 'pylons_config_file' into alembic.ini, and the application will
be loaded from there.

"""
from alembic import context
from paste.deploy import loadapp
from logging.config import fileConfig
from sqlalchemy.engine.base import Engine


try:
    # if pylons app already in, don't create a new app
    from pylons import config as pylons_config
    pylons_config['__file__']
except:
    config = context.config
    # can use config['__file__'] here, i.e. the Pylons
    # ini file, instead of alembic.ini
    config_file = config.get_main_option('pylons_config_file')
    fileConfig(config_file)
    wsgi_app = loadapp('config:%s' % config_file, relative_to='.')


# customize this section for non-standard engine configurations.
meta = __import__("%s.model.meta" % wsgi_app.config['pylons.package']).model.meta

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = None


def run_migrations_offline():
    """Run migrations in 'offline' mode.

    This configures the context with just a URL
    and not an Engine, though an Engine is acceptable
    here as well.  By skipping the Engine creation
    we don't even need a DBAPI to be available.

    Calls to context.execute() here emit the given string to the
    script output.

    """
    context.configure(
        url=meta.engine.url, target_metadata=target_metadata,
        literal_binds=True)
    with context.begin_transaction():
        context.run_migrations()


def run_migrations_online():
    """Run migrations in 'online' mode.

    In this scenario we need to create an Engine
    and associate a connection with the context.

    """
    # specify here how the engine is acquired
    # engine = meta.engine
    raise NotImplementedError("Please specify engine connectivity here")

    with engine.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata
        )

        with context.begin_transaction():
            context.run_migrations()

if context.is_offline_mode():
    run_migrations_offline()
else:
    run_migrations_online()
@ -1,24 +0,0 @@
"""${message}

Revision ID: ${up_revision}
Revises: ${down_revision | comma,n}
Create Date: ${create_date}

"""
from alembic import op
import sqlalchemy as sa
${imports if imports else ""}

# revision identifiers, used by Alembic.
revision = ${repr(up_revision)}
down_revision = ${repr(down_revision)}
branch_labels = ${repr(branch_labels)}
depends_on = ${repr(depends_on)}


def upgrade():
    ${upgrades if upgrades else "pass"}


def downgrade():
    ${downgrades if downgrades else "pass"}
@ -1,10 +0,0 @@
from .fixtures import TestBase
from .assertions import eq_, ne_, is_, is_not_, assert_raises_message, \
    eq_ignore_whitespace, assert_raises

from .util import provide_metadata

from alembic import util


from .config import requirements as requires
@ -1,207 +0,0 @@
from __future__ import absolute_import


import re
from .. import util
from sqlalchemy.engine import default
from ..util.compat import text_type, py3k
import contextlib
from sqlalchemy.util import decorator
from sqlalchemy import exc as sa_exc
import warnings
from . import mock


if not util.sqla_094:
    def eq_(a, b, msg=None):
        """Assert a == b, with repr messaging on failure."""
        assert a == b, msg or "%r != %r" % (a, b)

    def ne_(a, b, msg=None):
        """Assert a != b, with repr messaging on failure."""
        assert a != b, msg or "%r == %r" % (a, b)

    def is_(a, b, msg=None):
        """Assert a is b, with repr messaging on failure."""
        assert a is b, msg or "%r is not %r" % (a, b)

    def is_not_(a, b, msg=None):
        """Assert a is not b, with repr messaging on failure."""
        assert a is not b, msg or "%r is %r" % (a, b)

    def assert_raises(except_cls, callable_, *args, **kw):
        try:
            callable_(*args, **kw)
            success = False
        except except_cls:
            success = True

        # assert outside the block so it works for AssertionError too!
        assert success, "Callable did not raise an exception"

    def assert_raises_message(except_cls, msg, callable_, *args, **kwargs):
        try:
            callable_(*args, **kwargs)
            assert False, "Callable did not raise an exception"
        except except_cls as e:
            assert re.search(
                msg, text_type(e), re.UNICODE), "%r !~ %s" % (msg, e)
            print(text_type(e).encode('utf-8'))

else:
    from sqlalchemy.testing.assertions import eq_, ne_, is_, is_not_, \
        assert_raises_message, assert_raises


def eq_ignore_whitespace(a, b, msg=None):
    a = re.sub(r'^\s+?|\n', "", a)
    a = re.sub(r' {2,}', " ", a)
    b = re.sub(r'^\s+?|\n', "", b)
    b = re.sub(r' {2,}', " ", b)

    # convert for unicode string rendering,
    # using special escape character "!U"
    if py3k:
        b = re.sub(r'!U', '', b)
    else:
        b = re.sub(r'!U', 'u', b)

    assert a == b, msg or "%r != %r" % (a, b)


def assert_compiled(element, assert_string, dialect=None):
    dialect = _get_dialect(dialect)
    eq_(
        text_type(element.compile(dialect=dialect)).
        replace("\n", "").replace("\t", ""),
        assert_string.replace("\n", "").replace("\t", "")
    )


_dialects = {}


def _get_dialect(name):
    if name is None or name == 'default':
        return default.DefaultDialect()
    else:
        try:
            return _dialects[name]
        except KeyError:
            dialect_mod = getattr(
                __import__('sqlalchemy.dialects.%s' % name).dialects, name)
            _dialects[name] = d = dialect_mod.dialect()
            if name == 'postgresql':
                d.implicit_returning = True
            elif name == 'mssql':
                d.legacy_schema_aliasing = False
            return d


def expect_warnings(*messages, **kw):
    """Context manager which expects one or more warnings.

    With no arguments, squelches all SAWarnings emitted via
    sqlalchemy.util.warn and sqlalchemy.util.warn_limited.   Otherwise
    pass string expressions that will match selected warnings via regex;
    all non-matching warnings are sent through.

    The expect version **asserts** that the warnings were in fact seen.

    Note that the test suite sets SAWarning warnings to raise exceptions.

    """
    return _expect_warnings(sa_exc.SAWarning, messages, **kw)


@contextlib.contextmanager
def expect_warnings_on(db, *messages, **kw):
    """Context manager which expects one or more warnings on specific
    dialects.

    The expect version **asserts** that the warnings were in fact seen.

    """
    spec = db_spec(db)

    if isinstance(db, util.string_types) and not spec(config._current):
        yield
    elif not _is_excluded(*db):
        yield
    else:
        with expect_warnings(*messages, **kw):
            yield


def emits_warning(*messages):
    """Decorator form of expect_warnings().

    Note that emits_warning does **not** assert that the warnings
    were in fact seen.

    """

    @decorator
    def decorate(fn, *args, **kw):
        with expect_warnings(assert_=False, *messages):
            return fn(*args, **kw)

    return decorate


def emits_warning_on(db, *messages):
    """Mark a test as emitting a warning on a specific dialect.

    With no arguments, squelches all SAWarning failures.  Or pass one or more
    strings; these will be matched to the root of the warning description by
    warnings.filterwarnings().

    Note that emits_warning_on does **not** assert that the warnings
    were in fact seen.

    """
    @decorator
    def decorate(fn, *args, **kw):
        with expect_warnings_on(db, *messages):
            return fn(*args, **kw)

    return decorate


@contextlib.contextmanager
def _expect_warnings(exc_cls, messages, regex=True, assert_=True):

    if regex:
        filters = [re.compile(msg, re.I) for msg in messages]
    else:
        filters = messages

    seen = set(filters)

    real_warn = warnings.warn

    def our_warn(msg, exception=None, *arg, **kw):
        if exception and not issubclass(exception, exc_cls):
            return real_warn(msg, exception, *arg, **kw)

        if not filters:
            return

        for filter_ in filters:
            if (regex and filter_.match(msg)) or \
                    (not regex and filter_ == msg):
                seen.discard(filter_)
                break
        else:
            if exception is None:
                real_warn(msg, *arg, **kw)
            else:
                real_warn(msg, exception, *arg, **kw)

    with mock.patch("warnings.warn", our_warn):
        yield

    if assert_:
        assert not seen, "Warnings were not seen: %s" % \
            ", ".join("%r" % (s.pattern if regex else s) for s in seen)
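A small usage sketch of the helpers above; the failing callable and its message are contrived for illustration:

eq_(2 + 2, 4)

def boom():
    raise ValueError("bad widget: frobnicator missing")

# the message argument is matched against str(exception) as a regex
assert_raises_message(ValueError, "frobnicator", boom)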
@ -1,13 +0,0 @@
def get_url_driver_name(url):
    if '+' not in url.drivername:
        return url.get_dialect().driver
    else:
        return url.drivername.split('+')[1]


def get_url_backend_name(url):
    if '+' not in url.drivername:
        return url.drivername
    else:
        return url.drivername.split('+')[0]
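For example, given a SQLAlchemy URL object these helpers split out the driver and backend names (credentials below are placeholders):

from sqlalchemy.engine.url import make_url

url = make_url("postgresql+psycopg2://scott:tiger@localhost/test")
assert get_url_backend_name(url) == "postgresql"
assert get_url_driver_name(url) == "psycopg2"

# with no explicit driver, the dialect's default driver is reported
url = make_url("sqlite:///foo.db")
assert get_url_backend_name(url) == "sqlite"
assert get_url_driver_name(url) == "pysqlite"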
@ -1,87 +0,0 @@
# testing/config.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
this should be removable when Alembic targets SQLAlchemy 1.0.0
"""

import collections

requirements = None
db = None
db_url = None
db_opts = None
file_config = None
test_schema = None
test_schema_2 = None
_current = None


class Config(object):
    def __init__(self, db, db_opts, options, file_config):
        self.db = db
        self.db_opts = db_opts
        self.options = options
        self.file_config = file_config
        self.test_schema = "test_schema"
        self.test_schema_2 = "test_schema_2"

    _stack = collections.deque()
    _configs = {}

    @classmethod
    def register(cls, db, db_opts, options, file_config):
        """add a config as one of the global configs.

        If there are no configs set up yet, this config also
        gets set as the "_current".
        """
        cfg = Config(db, db_opts, options, file_config)

        cls._configs[cfg.db.name] = cfg
        cls._configs[(cfg.db.name, cfg.db.dialect)] = cfg
        cls._configs[cfg.db] = cfg
        return cfg

    @classmethod
    def set_as_current(cls, config):
        global db, _current, db_url, test_schema, test_schema_2, db_opts
        _current = config
        db_url = config.db.url
        db_opts = config.db_opts
        test_schema = config.test_schema
        test_schema_2 = config.test_schema_2
        db = config.db

    @classmethod
    def push_engine(cls, db):
        assert _current, "Can't push without a default Config set up"
        cls.push(
            Config(
                db, _current.db_opts, _current.options, _current.file_config)
        )

    @classmethod
    def push(cls, config):
        cls._stack.append(_current)
        cls.set_as_current(config)

    @classmethod
    def reset(cls):
        if cls._stack:
            cls.set_as_current(cls._stack[0])
            cls._stack.clear()

    @classmethod
    def all_configs(cls):
        for cfg in set(cls._configs.values()):
            yield cfg

    @classmethod
    def all_dbs(cls):
        for cfg in cls.all_configs():
            yield cfg.db
@ -1,28 +0,0 @@
# testing/engines.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
this should be removable when Alembic targets SQLAlchemy 1.0.0.
"""

from __future__ import absolute_import

from . import config


def testing_engine(url=None, options=None):
    """Produce an engine configured by --options with optional overrides."""

    from sqlalchemy import create_engine

    url = url or config.db.url
    if options is None:
        options = config.db_opts

    engine = create_engine(url, **options)

    return engine
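Usage is a one-liner; with no arguments the engine mirrors the configured test database, and both URL and options can be overridden. A minimal sketch against an in-memory SQLite database:

from sqlalchemy import text

# explicit URL and empty options so no global test config is required
engine = testing_engine(url="sqlite://", options={})
with engine.connect() as conn:
    assert conn.scalar(text("SELECT 1")) == 1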
@ -1,354 +0,0 @@
#!coding: utf-8

import os
import shutil
import textwrap

from ..util.compat import u
from ..script import Script, ScriptDirectory
from .. import util
from . import engines
from . import provision


def _get_staging_directory():
    if provision.FOLLOWER_IDENT:
        return "scratch_%s" % provision.FOLLOWER_IDENT
    else:
        return 'scratch'


def staging_env(create=True, template="generic", sourceless=False):
    from alembic import command, script
    cfg = _testing_config()
    if create:
        path = os.path.join(_get_staging_directory(), 'scripts')
        if os.path.exists(path):
            shutil.rmtree(path)
        command.init(cfg, path, template=template)
        if sourceless:
            try:
                # do an import so that a .pyc/.pyo is generated.
                util.load_python_file(path, 'env.py')
            except AttributeError:
                # we don't have the migration context set up yet,
                # so running the env.py throws this exception.
                # theoretically we could be using py_compiler here to
                # generate .pyc/.pyo without importing, but it's not
                # really worth it.
                pass
            make_sourceless(os.path.join(path, "env.py"))

    sc = script.ScriptDirectory.from_config(cfg)
    return sc


def clear_staging_env():
    shutil.rmtree(_get_staging_directory(), True)


def script_file_fixture(txt):
    dir_ = os.path.join(_get_staging_directory(), 'scripts')
    path = os.path.join(dir_, "script.py.mako")
    with open(path, 'w') as f:
        f.write(txt)


def env_file_fixture(txt):
    dir_ = os.path.join(_get_staging_directory(), 'scripts')
    txt = """
from alembic import context

config = context.config
""" + txt

    path = os.path.join(dir_, "env.py")
    pyc_path = util.pyc_file_from_path(path)
    if os.access(pyc_path, os.F_OK):
        os.unlink(pyc_path)

    with open(path, 'w') as f:
        f.write(txt)


def _sqlite_file_db(tempname="foo.db"):
    dir_ = os.path.join(_get_staging_directory(), 'scripts')
    url = "sqlite:///%s/%s" % (dir_, tempname)
    return engines.testing_engine(url=url)


def _sqlite_testing_config(sourceless=False):
    dir_ = os.path.join(_get_staging_directory(), 'scripts')
    url = "sqlite:///%s/foo.db" % dir_

    return _write_config_file("""
[alembic]
script_location = %s
sqlalchemy.url = %s
sourceless = %s

[loggers]
keys = root

[handlers]
keys = console

[logger_root]
level = WARN
handlers = console
qualname =

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatters]
keys = generic

[formatter_generic]
format = %%(levelname)-5.5s [%%(name)s] %%(message)s
datefmt = %%H:%%M:%%S
""" % (dir_, url, "true" if sourceless else "false"))


def _multi_dir_testing_config(sourceless=False, extra_version_location=''):
    dir_ = os.path.join(_get_staging_directory(), 'scripts')
    url = "sqlite:///%s/foo.db" % dir_

    return _write_config_file("""
[alembic]
script_location = %s
sqlalchemy.url = %s
sourceless = %s
version_locations = %%(here)s/model1/ %%(here)s/model2/ %%(here)s/model3/ %s

[loggers]
keys = root

[handlers]
keys = console

[logger_root]
level = WARN
handlers = console
qualname =

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatters]
keys = generic

[formatter_generic]
format = %%(levelname)-5.5s [%%(name)s] %%(message)s
datefmt = %%H:%%M:%%S
""" % (dir_, url, "true" if sourceless else "false",
       extra_version_location))


def _no_sql_testing_config(dialect="postgresql", directives=""):
    """use a postgresql url with no host so that
    connections are guaranteed to fail"""
    dir_ = os.path.join(_get_staging_directory(), 'scripts')
    return _write_config_file("""
[alembic]
script_location = %s
sqlalchemy.url = %s://
%s

[loggers]
keys = root

[handlers]
keys = console

[logger_root]
level = WARN
handlers = console
qualname =

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatters]
keys = generic

[formatter_generic]
format = %%(levelname)-5.5s [%%(name)s] %%(message)s
datefmt = %%H:%%M:%%S

""" % (dir_, dialect, directives))


def _write_config_file(text):
    cfg = _testing_config()
    with open(cfg.config_file_name, 'w') as f:
        f.write(text)
    return cfg


def _testing_config():
    from alembic.config import Config
    if not os.access(_get_staging_directory(), os.F_OK):
        os.mkdir(_get_staging_directory())
    return Config(os.path.join(_get_staging_directory(), 'test_alembic.ini'))


def write_script(
        scriptdir, rev_id, content, encoding='ascii', sourceless=False):
    old = scriptdir.revision_map.get_revision(rev_id)
    path = old.path

    content = textwrap.dedent(content)
    if encoding:
        content = content.encode(encoding)
    with open(path, 'wb') as fp:
        fp.write(content)
    pyc_path = util.pyc_file_from_path(path)
    if os.access(pyc_path, os.F_OK):
        os.unlink(pyc_path)
    script = Script._from_path(scriptdir, path)
    old = scriptdir.revision_map.get_revision(script.revision)
    if old.down_revision != script.down_revision:
        raise Exception("Can't change down_revision "
                        "on a refresh operation.")
    scriptdir.revision_map.add_revision(script, _replace=True)

    if sourceless:
        make_sourceless(path)


def make_sourceless(path):
    # note that if -O is set, you'd see pyo files here,
    # the pyc util function looks at sys.flags.optimize to handle this
    pyc_path = util.pyc_file_from_path(path)
    assert os.access(pyc_path, os.F_OK)

    # look for a non-pep3147 path here.
    # if not present, need to copy from __pycache__
    simple_pyc_path = util.simple_pyc_file_from_path(path)

    if not os.access(simple_pyc_path, os.F_OK):
        shutil.copyfile(pyc_path, simple_pyc_path)
    os.unlink(path)


def three_rev_fixture(cfg):
    a = util.rev_id()
    b = util.rev_id()
    c = util.rev_id()

    script = ScriptDirectory.from_config(cfg)
    script.generate_revision(a, "revision a", refresh=True)
    write_script(script, a, """\
"Rev A"
revision = '%s'
down_revision = None

from alembic import op


def upgrade():
    op.execute("CREATE STEP 1")


def downgrade():
    op.execute("DROP STEP 1")

""" % a)

    script.generate_revision(b, "revision b", refresh=True)
    write_script(script, b, u("""# coding: utf-8
"Rev B, méil"
revision = '%s'
down_revision = '%s'

from alembic import op


def upgrade():
    op.execute("CREATE STEP 2")


def downgrade():
    op.execute("DROP STEP 2")

""") % (b, a), encoding="utf-8")

    script.generate_revision(c, "revision c", refresh=True)
    write_script(script, c, """\
"Rev C"
revision = '%s'
down_revision = '%s'

from alembic import op


def upgrade():
    op.execute("CREATE STEP 3")


def downgrade():
    op.execute("DROP STEP 3")

""" % (c, b))
    return a, b, c


def _multidb_testing_config(engines):
    """alembic.ini fixture to work exactly with the 'multidb' template"""

    dir_ = os.path.join(_get_staging_directory(), 'scripts')

    databases = ", ".join(
        engines.keys()
    )
    engines = "\n\n".join(
        "[%s]\n"
        "sqlalchemy.url = %s" % (key, value.url)
        for key, value in engines.items()
    )

    return _write_config_file("""
[alembic]
script_location = %s
sourceless = false

databases = %s

%s
[loggers]
keys = root

[handlers]
keys = console

[logger_root]
level = WARN
handlers = console
qualname =

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatters]
keys = generic

[formatter_generic]
format = %%(levelname)-5.5s [%%(name)s] %%(message)s
datefmt = %%H:%%M:%%S
""" % (dir_, databases, engines)
    )
@ -1,447 +0,0 @@
|
|||
# testing/exclusions.py
|
||||
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
|
||||
# <see AUTHORS file>
|
||||
#
|
||||
# This module is part of SQLAlchemy and is released under
|
||||
# the MIT License: http://www.opensource.org/licenses/mit-license.php
|
||||
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
|
||||
this should be removable when Alembic targets SQLAlchemy 1.0.0
|
||||
"""
|
||||
|
||||
|
||||
import operator
|
||||
from .plugin.plugin_base import SkipTest
|
||||
from sqlalchemy.util import decorator
|
||||
from . import config
|
||||
from sqlalchemy import util
|
||||
from ..util import compat
|
||||
import inspect
|
||||
import contextlib
|
||||
from .compat import get_url_driver_name, get_url_backend_name
|
||||
|
||||
|
||||
def skip_if(predicate, reason=None):
|
||||
rule = compound()
|
||||
pred = _as_predicate(predicate, reason)
|
||||
rule.skips.add(pred)
|
||||
return rule
|
||||
|
||||
|
||||
def fails_if(predicate, reason=None):
|
||||
rule = compound()
|
||||
pred = _as_predicate(predicate, reason)
|
||||
rule.fails.add(pred)
|
||||
return rule
|
||||
|
||||
|
||||
class compound(object):
|
||||
def __init__(self):
|
||||
self.fails = set()
|
||||
self.skips = set()
|
||||
self.tags = set()
|
||||
|
||||
def __add__(self, other):
|
||||
return self.add(other)
|
||||
|
||||
def add(self, *others):
|
||||
copy = compound()
|
||||
copy.fails.update(self.fails)
|
||||
copy.skips.update(self.skips)
|
||||
copy.tags.update(self.tags)
|
||||
for other in others:
|
||||
copy.fails.update(other.fails)
|
||||
copy.skips.update(other.skips)
|
||||
copy.tags.update(other.tags)
|
||||
return copy
|
||||
|
||||
def not_(self):
|
||||
copy = compound()
|
||||
copy.fails.update(NotPredicate(fail) for fail in self.fails)
|
||||
copy.skips.update(NotPredicate(skip) for skip in self.skips)
|
||||
copy.tags.update(self.tags)
|
||||
return copy
|
||||
|
||||
@property
|
||||
def enabled(self):
|
||||
return self.enabled_for_config(config._current)
|
||||
|
||||
def enabled_for_config(self, config):
|
||||
for predicate in self.skips.union(self.fails):
|
||||
if predicate(config):
|
||||
return False
|
||||
else:
|
||||
return True
|
||||
|
||||
def matching_config_reasons(self, config):
|
||||
return [
|
||||
predicate._as_string(config) for predicate
|
||||
in self.skips.union(self.fails)
|
||||
if predicate(config)
|
||||
]
|
||||
|
||||
def include_test(self, include_tags, exclude_tags):
|
||||
return bool(
|
||||
not self.tags.intersection(exclude_tags) and
|
||||
(not include_tags or self.tags.intersection(include_tags))
|
||||
)
|
||||
|
||||
def _extend(self, other):
|
||||
self.skips.update(other.skips)
|
||||
self.fails.update(other.fails)
|
||||
self.tags.update(other.tags)
|
||||
|
||||
def __call__(self, fn):
|
||||
if hasattr(fn, '_sa_exclusion_extend'):
|
||||
fn._sa_exclusion_extend._extend(self)
|
||||
return fn
|
||||
|
||||
@decorator
|
||||
def decorate(fn, *args, **kw):
|
||||
return self._do(config._current, fn, *args, **kw)
|
||||
decorated = decorate(fn)
|
||||
decorated._sa_exclusion_extend = self
|
||||
return decorated
|
||||
|
||||
@contextlib.contextmanager
|
||||
def fail_if(self):
|
||||
all_fails = compound()
|
||||
all_fails.fails.update(self.skips.union(self.fails))
|
||||
|
||||
try:
|
||||
yield
|
||||
except Exception as ex:
|
||||
all_fails._expect_failure(config._current, ex)
|
||||
else:
|
||||
all_fails._expect_success(config._current)
|
||||
|
||||
def _do(self, config, fn, *args, **kw):
|
||||
for skip in self.skips:
|
||||
if skip(config):
|
||||
msg = "'%s' : %s" % (
|
||||
fn.__name__,
|
||||
skip._as_string(config)
|
||||
)
|
||||
raise SkipTest(msg)
|
||||
|
||||
try:
|
||||
return_value = fn(*args, **kw)
|
||||
except Exception as ex:
|
||||
self._expect_failure(config, ex, name=fn.__name__)
|
||||
else:
|
||||
self._expect_success(config, name=fn.__name__)
|
||||
return return_value
|
||||
|
||||
def _expect_failure(self, config, ex, name='block'):
|
||||
for fail in self.fails:
|
||||
if fail(config):
|
||||
print(("%s failed as expected (%s): %s " % (
|
||||
name, fail._as_string(config), str(ex))))
|
||||
break
|
||||
else:
|
||||
compat.raise_from_cause(ex)
|
||||
|
||||
def _expect_success(self, config, name='block'):
|
||||
if not self.fails:
|
||||
return
|
||||
for fail in self.fails:
|
||||
if not fail(config):
|
||||
break
|
||||
else:
|
||||
raise AssertionError(
|
||||
"Unexpected success for '%s' (%s)" %
|
||||
(
|
||||
name,
|
||||
" and ".join(
|
||||
fail._as_string(config)
|
||||
for fail in self.fails
|
||||
)
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def requires_tag(tagname):
|
||||
return tags([tagname])
|
||||
|
||||
|
||||
def tags(tagnames):
|
||||
comp = compound()
|
||||
comp.tags.update(tagnames)
|
||||
return comp
|
||||
|
||||
|
||||
def only_if(predicate, reason=None):
|
||||
predicate = _as_predicate(predicate)
|
||||
return skip_if(NotPredicate(predicate), reason)
|
||||
|
||||
|
||||
def succeeds_if(predicate, reason=None):
|
||||
predicate = _as_predicate(predicate)
|
||||
return fails_if(NotPredicate(predicate), reason)
|
||||
|
||||
|
||||
class Predicate(object):
|
||||
@classmethod
|
||||
def as_predicate(cls, predicate, description=None):
|
||||
if isinstance(predicate, compound):
|
||||
return cls.as_predicate(predicate.fails.union(predicate.skips))
|
||||
|
||||
elif isinstance(predicate, Predicate):
|
||||
if description and predicate.description is None:
|
||||
predicate.description = description
|
||||
return predicate
|
||||
elif isinstance(predicate, (list, set)):
|
||||
return OrPredicate(
|
||||
[cls.as_predicate(pred) for pred in predicate],
|
||||
description)
|
||||
elif isinstance(predicate, tuple):
|
||||
return SpecPredicate(*predicate)
|
||||
elif isinstance(predicate, compat.string_types):
|
||||
tokens = predicate.split(" ", 2)
|
||||
op = spec = None
|
||||
db = tokens.pop(0)
|
||||
if tokens:
|
||||
op = tokens.pop(0)
|
||||
if tokens:
|
||||
spec = tuple(int(d) for d in tokens.pop(0).split("."))
|
||||
return SpecPredicate(db, op, spec, description=description)
|
||||
elif util.callable(predicate):
|
||||
return LambdaPredicate(predicate, description)
|
||||
else:
|
||||
assert False, "unknown predicate type: %s" % predicate
|
||||
|
||||
def _format_description(self, config, negate=False):
|
||||
bool_ = self(config)
|
||||
if negate:
|
||||
bool_ = not negate
|
||||
return self.description % {
|
||||
"driver": get_url_driver_name(config.db.url),
|
||||
"database": get_url_backend_name(config.db.url),
|
||||
"doesnt_support": "doesn't support" if bool_ else "does support",
|
||||
"does_support": "does support" if bool_ else "doesn't support"
|
||||
}

    def _as_string(self, config=None, negate=False):
        raise NotImplementedError()


class BooleanPredicate(Predicate):
    def __init__(self, value, description=None):
        self.value = value
        self.description = description or "boolean %s" % value

    def __call__(self, config):
        return self.value

    def _as_string(self, config, negate=False):
        return self._format_description(config, negate=negate)


class SpecPredicate(Predicate):
    def __init__(self, db, op=None, spec=None, description=None):
        self.db = db
        self.op = op
        self.spec = spec
        self.description = description

    _ops = {
        '<': operator.lt,
        '>': operator.gt,
        '==': operator.eq,
        '!=': operator.ne,
        '<=': operator.le,
        '>=': operator.ge,
        'in': operator.contains,
        'between': lambda val, pair: val >= pair[0] and val <= pair[1],
    }

    def __call__(self, config):
        engine = config.db

        if "+" in self.db:
            dialect, driver = self.db.split('+')
        else:
            dialect, driver = self.db, None

        if dialect and engine.name != dialect:
            return False
        if driver is not None and engine.driver != driver:
            return False

        if self.op is not None:
            assert driver is None, "DBAPI version specs not supported yet"

            version = _server_version(engine)
            oper = hasattr(self.op, '__call__') and self.op \
                or self._ops[self.op]
            return oper(version, self.spec)
        else:
            return True

    def _as_string(self, config, negate=False):
        if self.description is not None:
            return self._format_description(config)
        elif self.op is None:
            if negate:
                return "not %s" % self.db
            else:
                return "%s" % self.db
        else:
            if negate:
                return "not %s %s %s" % (
                    self.db,
                    self.op,
                    self.spec
                )
            else:
                return "%s %s %s" % (
                    self.db,
                    self.op,
                    self.spec
                )


class LambdaPredicate(Predicate):
    def __init__(self, lambda_, description=None, args=None, kw=None):
        spec = inspect.getargspec(lambda_)
        if not spec[0]:
            self.lambda_ = lambda db: lambda_()
        else:
            self.lambda_ = lambda_
        self.args = args or ()
        self.kw = kw or {}
        if description:
            self.description = description
        elif lambda_.__doc__:
            self.description = lambda_.__doc__
        else:
            self.description = "custom function"

    def __call__(self, config):
        return self.lambda_(config)

    def _as_string(self, config, negate=False):
        return self._format_description(config)


class NotPredicate(Predicate):
    def __init__(self, predicate, description=None):
        self.predicate = predicate
        self.description = description

    def __call__(self, config):
        return not self.predicate(config)

    def _as_string(self, config, negate=False):
        if self.description:
            return self._format_description(config, not negate)
        else:
            return self.predicate._as_string(config, not negate)


class OrPredicate(Predicate):
    def __init__(self, predicates, description=None):
        self.predicates = predicates
        self.description = description

    def __call__(self, config):
        for pred in self.predicates:
            if pred(config):
                return True
        return False

    def _eval_str(self, config, negate=False):
        if negate:
            conjunction = " and "
        else:
            conjunction = " or "
        return conjunction.join(p._as_string(config, negate=negate)
                                for p in self.predicates)

    def _negation_str(self, config):
        if self.description is not None:
            return "Not " + self._format_description(config)
        else:
            return self._eval_str(config, negate=True)

    def _as_string(self, config, negate=False):
        if negate:
            return self._negation_str(config)
        else:
            if self.description is not None:
                return self._format_description(config)
            else:
                return self._eval_str(config)


_as_predicate = Predicate.as_predicate


def _is_excluded(db, op, spec):
    return SpecPredicate(db, op, spec)(config._current)


def _server_version(engine):
    """Return a server_version_info tuple."""

    # force metadata to be retrieved
    conn = engine.connect()
    version = getattr(engine.dialect, 'server_version_info', ())
    conn.close()
    return version


def db_spec(*dbs):
    return OrPredicate(
        [Predicate.as_predicate(db) for db in dbs]
    )


def open():
    return skip_if(BooleanPredicate(False, "mark as execute"))


def closed():
    return skip_if(BooleanPredicate(True, "marked as skip"))


def fails(msg=None):
    return fails_if(BooleanPredicate(True, msg or "expected to fail"))


@decorator
def future(fn, *arg):
    return fails_if(LambdaPredicate(fn), "Future feature")


def fails_on(db, reason=None):
    return fails_if(SpecPredicate(db), reason)


def fails_on_everything_except(*dbs):
    return succeeds_if(
        OrPredicate([
            Predicate.as_predicate(db) for db in dbs
        ])
    )


def skip(db, reason=None):
    return skip_if(SpecPredicate(db), reason)


def only_on(dbs, reason=None):
    return only_if(
        OrPredicate([Predicate.as_predicate(db) for db in util.to_list(dbs)])
    )


def exclude(db, op, spec, reason=None):
    return skip_if(SpecPredicate(db, op, spec), reason)


def against(config, *queries):
    assert queries, "no queries sent!"
    return OrPredicate([
        Predicate.as_predicate(query)
        for query in queries
    ])(config)
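A minimal sketch of how these helpers are typically applied to tests; skip_if/only_if are defined earlier in this module, and the test names here are made up:

    # Hypothetical usage illustration (not from the original file).
    class RoundTripTest(TestBase):

        @only_on("postgresql")
        def test_pg_specific(self):
            ...

        @exclude("mysql", "<", (5, 6), "needs newer MySQL")
        def test_modern_mysql(self):
            ...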
@ -1,165 +0,0 @@
# coding: utf-8
import io
import re

from sqlalchemy import create_engine, text, MetaData

import alembic
from ..util.compat import configparser
from .. import util
from ..util.compat import string_types, text_type
from ..migration import MigrationContext
from ..environment import EnvironmentContext
from ..operations import Operations
from contextlib import contextmanager
from .plugin.plugin_base import SkipTest
from .assertions import _get_dialect, eq_
from . import mock

testing_config = configparser.ConfigParser()
testing_config.read(['test.cfg'])


if not util.sqla_094:
    class TestBase(object):
        # A sequence of database names to always run, regardless of the
        # constraints below.
        __whitelist__ = ()

        # A sequence of requirement names matching testing.requires decorators
        __requires__ = ()

        # A sequence of dialect names to exclude from the test class.
        __unsupported_on__ = ()

        # If present, test class is only runnable for the *single* specified
        # dialect.  If you need multiple, use __unsupported_on__ and invert.
        __only_on__ = None

        # A sequence of no-arg callables. If any are True, the entire
        # testcase is skipped.
        __skip_if__ = None

        def assert_(self, val, msg=None):
            assert val, msg

        # apparently a handful of tests are doing this....OK
        def setup(self):
            if hasattr(self, "setUp"):
                self.setUp()

        def teardown(self):
            if hasattr(self, "tearDown"):
                self.tearDown()
else:
    from sqlalchemy.testing.fixtures import TestBase


def capture_db():
    buf = []

    def dump(sql, *multiparams, **params):
        buf.append(str(sql.compile(dialect=engine.dialect)))

    engine = create_engine("postgresql://", strategy="mock", executor=dump)
    return engine, buf


_engs = {}


@contextmanager
def capture_context_buffer(**kw):
    if kw.pop('bytes_io', False):
        buf = io.BytesIO()
    else:
        buf = io.StringIO()

    kw.update({
        'dialect_name': "sqlite",
        'output_buffer': buf
    })
    conf = EnvironmentContext.configure

    def configure(*arg, **opt):
        opt.update(**kw)
        return conf(*arg, **opt)

    with mock.patch.object(EnvironmentContext, "configure", configure):
        yield buf
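capture_context_buffer patches EnvironmentContext.configure so that offline-mode SQL lands in an in-memory buffer. A hedged sketch of a test using it, assuming `cfg` is an alembic.config.Config pointing at a test environment:

    # Hypothetical usage sketch (not from the original file).
    from alembic import command

    with capture_context_buffer(transactional_ddl=True) as buf:
        command.upgrade(cfg, "head", sql=True)   # offline "--sql" mode
    assert "CREATE TABLE" in buf.getvalue()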


def op_fixture(
        dialect='default', as_sql=False,
        naming_convention=None, literal_binds=False):

    opts = {}
    if naming_convention:
        if not util.sqla_092:
            raise SkipTest(
                "naming_convention feature requires "
                "sqla 0.9.2 or greater")
        opts['target_metadata'] = MetaData(naming_convention=naming_convention)

    class buffer_(object):
        def __init__(self):
            self.lines = []

        def write(self, msg):
            msg = msg.strip()
            msg = re.sub(r'[\n\t]', '', msg)
            if as_sql:
                # the impl produces soft tabs,
                # so search for blocks of 4 spaces
                msg = re.sub(r'    ', '', msg)
                # raw string here; '\;' is an invalid string escape
                msg = re.sub(r';\n*$', '', msg)

            self.lines.append(msg)

        def flush(self):
            pass

    buf = buffer_()

    class ctx(MigrationContext):
        def clear_assertions(self):
            buf.lines[:] = []

        def assert_(self, *sql):
            # TODO: make this more flexible about
            # whitespace and such
            eq_(buf.lines, list(sql))

        def assert_contains(self, sql):
            for stmt in buf.lines:
                if sql in stmt:
                    return
            else:
                assert False, "Could not locate fragment %r in %r" % (
                    sql,
                    buf.lines
                )

    if as_sql:
        opts['as_sql'] = as_sql
    if literal_binds:
        opts['literal_binds'] = literal_binds
    ctx_dialect = _get_dialect(dialect)
    if not as_sql:
        def execute(stmt, *multiparam, **param):
            if isinstance(stmt, string_types):
                stmt = text(stmt)
            assert stmt.supports_execution
            sql = text_type(stmt.compile(dialect=ctx_dialect))

            buf.write(sql)

        connection = mock.Mock(dialect=ctx_dialect, execute=execute)
    else:
        opts['output_buffer'] = buf
        connection = None
    context = ctx(
        ctx_dialect,
        connection,
        opts)

    alembic.op._proxy = Operations(context)
    return context
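The typical shape of an op_fixture-based test, as an illustration only (table and column names made up):

    # Hypothetical usage sketch (not from the original file).
    from sqlalchemy import Column, Integer
    from alembic import op

    context = op_fixture('postgresql')
    op.add_column('t1', Column('c1', Integer))
    context.assert_("ALTER TABLE t1 ADD COLUMN c1 INTEGER")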
@ -1,25 +0,0 @@
# testing/mock.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php

"""Import stub for mock library.

NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
this should be removable when Alembic targets SQLAlchemy 1.0.0

"""
from __future__ import absolute_import
from ..util.compat import py33

if py33:
    from unittest.mock import MagicMock, Mock, call, patch, ANY
else:
    try:
        from mock import MagicMock, Mock, call, patch, ANY  # noqa
    except ImportError:
        raise ImportError(
            "SQLAlchemy's test suite requires the "
            "'mock' library as of 0.8.2.")
@ -1,3 +0,0 @@
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
|
||||
this should be removable when Alembic targets SQLAlchemy 1.0.0
|
||||
"""
@ -1,44 +0,0 @@
"""
|
||||
Bootstrapper for nose/pytest plugins.
|
||||
|
||||
The entire rationale for this system is to get the modules in plugin/
|
||||
imported without importing all of the supporting library, so that we can
|
||||
set up things for testing before coverage starts.
|
||||
|
||||
The rationale for all of plugin/ being *in* the supporting library in the
|
||||
first place is so that the testing and plugin suite is available to other
|
||||
libraries, mainly external SQLAlchemy and Alembic dialects, to make use
|
||||
of the same test environment and standard suites available to
|
||||
SQLAlchemy/Alembic themselves without the need to ship/install a separate
|
||||
package outside of SQLAlchemy.
|
||||
|
||||
NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
|
||||
this should be removable when Alembic targets SQLAlchemy 1.0.0.
|
||||
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
|
||||
bootstrap_file = locals()['bootstrap_file']
|
||||
to_bootstrap = locals()['to_bootstrap']
|
||||
|
||||
|
||||
def load_file_as_module(name):
|
||||
path = os.path.join(os.path.dirname(bootstrap_file), "%s.py" % name)
|
||||
if sys.version_info >= (3, 3):
|
||||
from importlib import machinery
|
||||
mod = machinery.SourceFileLoader(name, path).load_module()
|
||||
else:
|
||||
import imp
|
||||
mod = imp.load_source(name, path)
|
||||
return mod
|
||||
|
||||
if to_bootstrap == "pytest":
|
||||
sys.modules["alembic_plugin_base"] = load_file_as_module("plugin_base")
|
||||
sys.modules["alembic_pytestplugin"] = load_file_as_module("pytestplugin")
|
||||
elif to_bootstrap == "nose":
|
||||
sys.modules["alembic_plugin_base"] = load_file_as_module("plugin_base")
|
||||
sys.modules["alembic_noseplugin"] = load_file_as_module("noseplugin")
|
||||
else:
|
||||
raise Exception("unknown bootstrap: %s" % to_bootstrap) # noqa
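This file is meant to be exec'd by a top-level runner that injects bootstrap_file and to_bootstrap into the local namespace before any testing modules are imported. A minimal sketch of such a conftest.py, with the path an assumption about this repo's layout:

    # Hypothetical conftest.py sketch (path is illustrative).
    bootstrap_file = "alembic/testing/plugin/bootstrap.py"
    to_bootstrap = "pytest"

    with open(bootstrap_file) as f:
        code = compile(f.read(), "bootstrap.py", "exec")
        exec(code, globals(), locals())

    from alembic_pytestplugin import *  # noqa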
@ -1,103 +0,0 @@
# plugin/noseplugin.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php

"""
Enhance nose with extra options and behaviors for running SQLAlchemy tests.


NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
this should be removable when Alembic targets SQLAlchemy 1.0.0.

"""

try:
    # installed by bootstrap.py
    import alembic_plugin_base as plugin_base
except ImportError:
    # assume we're a package, use traditional import
    from . import plugin_base

import os
import sys

from nose.plugins import Plugin
fixtures = None

py3k = sys.version_info >= (3, 0)


class NoseSQLAlchemy(Plugin):
    enabled = True

    name = 'sqla_testing'
    score = 100

    def options(self, parser, env=os.environ):
        Plugin.options(self, parser, env)
        opt = parser.add_option

        def make_option(name, **kw):
            callback_ = kw.pop("callback", None)
            if callback_:
                def wrap_(option, opt_str, value, parser):
                    callback_(opt_str, value, parser)
                kw["callback"] = wrap_
            opt(name, **kw)

        plugin_base.setup_options(make_option)
        plugin_base.read_config()

    def configure(self, options, conf):
        super(NoseSQLAlchemy, self).configure(options, conf)
        plugin_base.pre_begin(options)

        plugin_base.set_coverage_flag(options.enable_plugin_coverage)

    def begin(self):
        global fixtures
        from alembic.testing import fixtures  # noqa

        plugin_base.post_begin()

    def describeTest(self, test):
        return ""

    def wantFunction(self, fn):
        return False

    def wantMethod(self, fn):
        if py3k:
            if not hasattr(fn.__self__, 'cls'):
                return False
            cls = fn.__self__.cls
        else:
            cls = fn.im_class
        return plugin_base.want_method(cls, fn)

    def wantClass(self, cls):
        return plugin_base.want_class(cls)

    def beforeTest(self, test):
        plugin_base.before_test(
            test,
            test.test.cls.__module__,
            test.test.cls, test.test.method.__name__)

    def afterTest(self, test):
        plugin_base.after_test(test)

    def startContext(self, ctx):
        if not isinstance(ctx, type) \
                or not issubclass(ctx, fixtures.TestBase):
            return
        plugin_base.start_test_class(ctx)

    def stopContext(self, ctx):
        if not isinstance(ctx, type) \
                or not issubclass(ctx, fixtures.TestBase):
            return
        plugin_base.stop_test_class(ctx)
@ -1,540 +0,0 @@
# plugin/plugin_base.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Testing extensions.

this module is designed to work as a testing-framework-agnostic library,
so that we can continue to support nose and also begin adding new
functionality via py.test.

NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
this should be removable when Alembic targets SQLAlchemy 1.0.0


"""

from __future__ import absolute_import
try:
    # unittest has a SkipTest also but pytest doesn't
    # honor it unless nose is imported too...
    from nose import SkipTest
except ImportError:
    from pytest import skip
    SkipTest = skip.Exception

import sys
import re

py3k = sys.version_info >= (3, 0)

if py3k:
    import configparser
else:
    import ConfigParser as configparser

# late imports
fixtures = None
engines = None
provision = None
exclusions = None
warnings = None
assertions = None
requirements = None
config = None
util = None
file_config = None


logging = None
include_tags = set()
exclude_tags = set()
options = None


def setup_options(make_option):
    make_option("--log-info", action="callback", type="string", callback=_log,
                help="turn on info logging for <LOG> (multiple OK)")
    make_option("--log-debug", action="callback",
                type="string", callback=_log,
                help="turn on debug logging for <LOG> (multiple OK)")
    make_option("--db", action="append", type="string", dest="db",
                help="Use prefab database uri. Multiple OK, "
                "first one is run by default.")
    make_option('--dbs', action='callback', callback=_list_dbs,
                help="List available prefab dbs")
    make_option("--dburi", action="append", type="string", dest="dburi",
                help="Database uri. Multiple OK, "
                "first one is run by default.")
    make_option("--dropfirst", action="store_true", dest="dropfirst",
                help="Drop all tables in the target database first")
    make_option("--backend-only", action="store_true", dest="backend_only",
                help="Run only tests marked with __backend__")
    make_option("--low-connections", action="store_true",
                dest="low_connections",
                help="Use a low number of distinct connections - "
                "i.e. for Oracle TNS")
    make_option("--write-idents", type="string", dest="write_idents",
                help="write out generated follower idents to <file>, "
                "when -n<num> is used")
    make_option("--reversetop", action="store_true",
                dest="reversetop", default=False,
                help="Use a random-ordering set implementation in the ORM "
                "(helps reveal dependency issues)")
    make_option("--requirements", action="callback", type="string",
                callback=_requirements_opt,
                help="requirements class for testing, overrides setup.cfg")
    make_option("--with-cdecimal", action="store_true",
                dest="cdecimal", default=False,
                help="Monkeypatch the cdecimal library into Python 'decimal' "
                "for all tests")
    make_option("--include-tag", action="callback", callback=_include_tag,
                type="string",
                help="Include tests with tag <tag>")
    make_option("--exclude-tag", action="callback", callback=_exclude_tag,
                type="string",
                help="Exclude tests with tag <tag>")
    make_option("--mysql-engine", action="store",
                dest="mysql_engine", default=None,
                help="Use the specified MySQL storage engine for all tables, "
                "default is a db-default/InnoDB combo.")


def configure_follower(follower_ident):
    """Configure required state for a follower.

    This invokes in the parent process and typically includes
    database creation.

    """
    from alembic.testing import provision
    provision.FOLLOWER_IDENT = follower_ident


def memoize_important_follower_config(dict_):
    """Store important configuration we will need to send to a follower.

    This invokes in the parent process after normal config is set up.

    This is necessary as py.test seems to not be using forking, so we
    start with nothing in memory, *but* it isn't running our argparse
    callables, so we have to just copy all of that over.

    """
    dict_['memoized_config'] = {
        'include_tags': include_tags,
        'exclude_tags': exclude_tags
    }


def restore_important_follower_config(dict_):
    """Restore important configuration needed by a follower.

    This invokes in the follower process.

    """
    include_tags.update(dict_['memoized_config']['include_tags'])
    exclude_tags.update(dict_['memoized_config']['exclude_tags'])


def read_config():
    global file_config
    file_config = configparser.ConfigParser()
    file_config.read(['setup.cfg', 'test.cfg'])


def pre_begin(opt):
    """things to set up early, before coverage might be setup."""
    global options
    options = opt
    for fn in pre_configure:
        fn(options, file_config)


def set_coverage_flag(value):
    options.has_coverage = value


def post_begin():
    """things to set up later, once we know coverage is running."""

    # Lazy setup of other options (post coverage)
    for fn in post_configure:
        fn(options, file_config)

    # late imports, has to happen after config as well
    # as nose plugins like coverage
    global util, fixtures, engines, exclusions, \
        assertions, warnings, profiling,\
        config, testing
    from alembic.testing import config, warnings, exclusions  # noqa
    from alembic.testing import engines, fixtures  # noqa
    from sqlalchemy import util  # noqa
    warnings.setup_filters()


def _log(opt_str, value, parser):
    global logging
    if not logging:
        import logging
        logging.basicConfig()

    if opt_str.endswith('-info'):
        logging.getLogger(value).setLevel(logging.INFO)
    elif opt_str.endswith('-debug'):
        logging.getLogger(value).setLevel(logging.DEBUG)


def _list_dbs(*args):
    print("Available --db options (use --dburi to override)")
    for macro in sorted(file_config.options('db')):
        print("%20s\t%s" % (macro, file_config.get('db', macro)))
    sys.exit(0)


def _requirements_opt(opt_str, value, parser):
    _setup_requirements(value)


def _exclude_tag(opt_str, value, parser):
    exclude_tags.add(value.replace('-', '_'))


def _include_tag(opt_str, value, parser):
    include_tags.add(value.replace('-', '_'))

pre_configure = []
post_configure = []


def pre(fn):
    pre_configure.append(fn)
    return fn


def post(fn):
    post_configure.append(fn)
    return fn


@pre
def _setup_options(opt, file_config):
    global options
    options = opt


@pre
def _monkeypatch_cdecimal(options, file_config):
    if options.cdecimal:
        import cdecimal
        sys.modules['decimal'] = cdecimal


@post
def _engine_uri(options, file_config):
    from alembic.testing import config
    from alembic.testing import provision

    if options.dburi:
        db_urls = list(options.dburi)
    else:
        db_urls = []

    if options.db:
        for db_token in options.db:
            for db in re.split(r'[,\s]+', db_token):
                if db not in file_config.options('db'):
                    raise RuntimeError(
                        "Unknown URI specifier '%s'.  "
                        "Specify --dbs for known uris."
                        % db)
                else:
                    db_urls.append(file_config.get('db', db))

    if not db_urls:
        db_urls.append(file_config.get('db', 'default'))

    for db_url in db_urls:
        cfg = provision.setup_config(
            db_url, options, file_config, provision.FOLLOWER_IDENT)

        if not config._current:
            cfg.set_as_current(cfg)
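For reference, _engine_uri above resolves --db names through a [db] section read from setup.cfg or test.cfg. A hypothetical sketch of such a section (the URLs are illustrative, not taken from this repo):

    [db]
    default = sqlite:///:memory:
    sqlite = sqlite:///:memory:
    postgresql = postgresql://scott:tiger@127.0.0.1:5432/test
    mysql = mysql://scott:tiger@127.0.0.1:3306/test

Running the suite with --db postgresql then selects the matching URL, while --dburi bypasses the lookup entirely.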


@post
def _requirements(options, file_config):

    requirement_cls = file_config.get('sqla_testing', "requirement_cls")
    _setup_requirements(requirement_cls)


def _setup_requirements(argument):
    from alembic.testing import config

    if config.requirements is not None:
        return

    modname, clsname = argument.split(":")

    # importlib.import_module() only introduced in 2.7, a little
    # late
    mod = __import__(modname)
    for component in modname.split(".")[1:]:
        mod = getattr(mod, component)
    req_cls = getattr(mod, clsname)

    config.requirements = req_cls()


@post
def _prep_testing_database(options, file_config):
    from alembic.testing import config
    from alembic.testing.exclusions import against
    from sqlalchemy import schema
    from alembic import util

    if util.sqla_08:
        from sqlalchemy import inspect
    else:
        from sqlalchemy.engine.reflection import Inspector
        inspect = Inspector.from_engine

    if options.dropfirst:
        for cfg in config.Config.all_configs():
            e = cfg.db
            inspector = inspect(e)
            try:
                view_names = inspector.get_view_names()
            except NotImplementedError:
                pass
            else:
                for vname in view_names:
                    e.execute(schema._DropView(
                        schema.Table(vname, schema.MetaData())
                    ))

            if config.requirements.schemas.enabled_for_config(cfg):
                try:
                    view_names = inspector.get_view_names(
                        schema="test_schema")
                except NotImplementedError:
                    pass
                else:
                    for vname in view_names:
                        e.execute(schema._DropView(
                            schema.Table(vname, schema.MetaData(),
                                         schema="test_schema")
                        ))

            for tname in reversed(inspector.get_table_names(
                    order_by="foreign_key")):
                e.execute(schema.DropTable(
                    schema.Table(tname, schema.MetaData())
                ))

            if config.requirements.schemas.enabled_for_config(cfg):
                for tname in reversed(inspector.get_table_names(
                        order_by="foreign_key", schema="test_schema")):
                    e.execute(schema.DropTable(
                        schema.Table(tname, schema.MetaData(),
                                     schema="test_schema")
                    ))

            if against(cfg, "postgresql") and util.sqla_100:
                from sqlalchemy.dialects import postgresql
                for enum in inspector.get_enums("*"):
                    e.execute(postgresql.DropEnumType(
                        postgresql.ENUM(
                            name=enum['name'],
                            schema=enum['schema'])))


@post
def _reverse_topological(options, file_config):
    if options.reversetop:
        from sqlalchemy.orm.util import randomize_unitofwork
        randomize_unitofwork()


@post
def _post_setup_options(opt, file_config):
    from alembic.testing import config
    config.options = options
    config.file_config = file_config


def want_class(cls):
    if not issubclass(cls, fixtures.TestBase):
        return False
    elif cls.__name__.startswith('_'):
        return False
    elif config.options.backend_only and not getattr(cls, '__backend__',
                                                     False):
        return False
    else:
        return True


def want_method(cls, fn):
    if not fn.__name__.startswith("test_"):
        return False
    elif fn.__module__ is None:
        return False
    elif include_tags:
        return (
            hasattr(cls, '__tags__') and
            exclusions.tags(cls.__tags__).include_test(
                include_tags, exclude_tags)
        ) or (
            hasattr(fn, '_sa_exclusion_extend') and
            fn._sa_exclusion_extend.include_test(
                include_tags, exclude_tags)
        )
    elif exclude_tags and hasattr(cls, '__tags__'):
        return exclusions.tags(cls.__tags__).include_test(
            include_tags, exclude_tags)
    elif exclude_tags and hasattr(fn, '_sa_exclusion_extend'):
        return fn._sa_exclusion_extend.include_test(include_tags, exclude_tags)
    else:
        return True
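want_method consults the include/exclude tag sets populated by the --include-tag and --exclude-tag callbacks above. A hypothetical sketch of a tagged test class and the corresponding command lines:

    # Hypothetical illustration (class and tags made up).
    class BackendHeavyTest(TestBase):
        __tags__ = ('backend', 'slow')

        def test_roundtrip(self):
            ...

    # selected with:    py.test --include-tag slow
    # filtered out by:  py.test --exclude-tag slow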


def generate_sub_tests(cls, module):
    if getattr(cls, '__backend__', False):
        for cfg in _possible_configs_for_cls(cls):
            name = "%s_%s_%s" % (cls.__name__, cfg.db.name, cfg.db.driver)
            subcls = type(
                name,
                (cls, ),
                {
                    "__only_on__": ("%s+%s" % (cfg.db.name, cfg.db.driver)),
                }
            )
            setattr(module, name, subcls)
            yield subcls
    else:
        yield cls
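Concretely, generate_sub_tests turns one __backend__ class into one subclass per configured database. An illustration only, with names made up:

    # Hypothetical: with postgresql+psycopg2 and sqlite+pysqlite configured,
    class RoundTripTest(TestBase):
        __backend__ = True

    # is expanded into module-level subclasses roughly like:
    #   RoundTripTest_postgresql_psycopg2  (__only_on__ = "postgresql+psycopg2")
    #   RoundTripTest_sqlite_pysqlite      (__only_on__ = "sqlite+pysqlite")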


def start_test_class(cls):
    _do_skips(cls)
    _setup_engine(cls)


def stop_test_class(cls):
    # from sqlalchemy import inspect
    # assert not inspect(testing.db).get_table_names()
    _restore_engine()


def _restore_engine():
    config._current.reset()


def _setup_engine(cls):
    if getattr(cls, '__engine_options__', None):
        eng = engines.testing_engine(options=cls.__engine_options__)
        config._current.push_engine(eng)


def before_test(test, test_module_name, test_class, test_name):
    pass


def after_test(test):
    pass


def _possible_configs_for_cls(cls, reasons=None):
    all_configs = set(config.Config.all_configs())

    if cls.__unsupported_on__:
        spec = exclusions.db_spec(*cls.__unsupported_on__)
        for config_obj in list(all_configs):
            if spec(config_obj):
                all_configs.remove(config_obj)

    if getattr(cls, '__only_on__', None):
        spec = exclusions.db_spec(*util.to_list(cls.__only_on__))
        for config_obj in list(all_configs):
            if not spec(config_obj):
                all_configs.remove(config_obj)

    if hasattr(cls, '__requires__'):
        requirements = config.requirements
        for config_obj in list(all_configs):
            for requirement in cls.__requires__:
                check = getattr(requirements, requirement)

                skip_reasons = check.matching_config_reasons(config_obj)
                if skip_reasons:
                    all_configs.remove(config_obj)
                    if reasons is not None:
                        reasons.extend(skip_reasons)
                    break

    if hasattr(cls, '__prefer_requires__'):
        non_preferred = set()
        requirements = config.requirements
        for config_obj in list(all_configs):
            for requirement in cls.__prefer_requires__:
                check = getattr(requirements, requirement)

                if not check.enabled_for_config(config_obj):
                    non_preferred.add(config_obj)
        if all_configs.difference(non_preferred):
            all_configs.difference_update(non_preferred)

    return all_configs


def _do_skips(cls):
    reasons = []
    all_configs = _possible_configs_for_cls(cls, reasons)

    if getattr(cls, '__skip_if__', False):
        for c in getattr(cls, '__skip_if__'):
            if c():
                raise SkipTest("'%s' skipped by %s" % (
                    cls.__name__, c.__name__)
                )

    if not all_configs:
        if getattr(cls, '__backend__', False):
            msg = "'%s' unsupported for implementation '%s'" % (
                cls.__name__, cls.__only_on__)
        else:
            msg = "'%s' unsupported on any DB implementation %s%s" % (
                cls.__name__,
                ", ".join(
                    "'%s(%s)+%s'" % (
                        config_obj.db.name,
                        ".".join(
                            str(dig) for dig in
                            config_obj.db.dialect.server_version_info),
                        config_obj.db.driver
                    )
                    for config_obj in config.Config.all_configs()
                ),
                ", ".join(reasons)
            )
        raise SkipTest(msg)
    elif hasattr(cls, '__prefer_backends__'):
        non_preferred = set()
        spec = exclusions.db_spec(*util.to_list(cls.__prefer_backends__))
        for config_obj in all_configs:
            if not spec(config_obj):
                non_preferred.add(config_obj)
        if all_configs.difference(non_preferred):
            all_configs.difference_update(non_preferred)

    if config._current not in all_configs:
        _setup_config(all_configs.pop(), cls)


def _setup_config(config_obj, ctx):
    config._current.push(config_obj)
@ -1,194 +0,0 @@
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
|
||||
this should be removable when Alembic targets SQLAlchemy 1.0.0.
|
||||
"""
|
||||
|
||||
try:
|
||||
# installed by bootstrap.py
|
||||
import alembic_plugin_base as plugin_base
|
||||
except ImportError:
|
||||
# assume we're a package, use traditional import
|
||||
from . import plugin_base
|
||||
|
||||
import sys
|
||||
|
||||
py3k = sys.version_info >= (3, 0)
|
||||
|
||||
import pytest
|
||||
import argparse
|
||||
import inspect
|
||||
import collections
|
||||
import os
|
||||
|
||||
try:
|
||||
import xdist # noqa
|
||||
has_xdist = True
|
||||
except ImportError:
|
||||
has_xdist = False
|
||||
|
||||
|
||||
def pytest_addoption(parser):
|
||||
group = parser.getgroup("sqlalchemy")
|
||||
|
||||
def make_option(name, **kw):
|
||||
callback_ = kw.pop("callback", None)
|
||||
if callback_:
|
||||
class CallableAction(argparse.Action):
|
||||
def __call__(self, parser, namespace,
|
||||
values, option_string=None):
|
||||
callback_(option_string, values, parser)
|
||||
kw["action"] = CallableAction
|
||||
|
||||
group.addoption(name, **kw)
|
||||
|
||||
plugin_base.setup_options(make_option)
|
||||
plugin_base.read_config()
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
if hasattr(config, "slaveinput"):
|
||||
plugin_base.restore_important_follower_config(config.slaveinput)
|
||||
plugin_base.configure_follower(
|
||||
config.slaveinput["follower_ident"]
|
||||
)
|
||||
if config.option.write_idents:
|
||||
with open(config.option.write_idents, "a") as file_:
|
||||
file_.write(config.slaveinput["follower_ident"] + "\n")
|
||||
else:
|
||||
if config.option.write_idents and \
|
||||
os.path.exists(config.option.write_idents):
|
||||
os.remove(config.option.write_idents)
|
||||
|
||||
|
||||
plugin_base.pre_begin(config.option)
|
||||
|
||||
coverage = bool(getattr(config.option, "cov_source", False))
|
||||
plugin_base.set_coverage_flag(coverage)
|
||||
|
||||
|
||||
def pytest_sessionstart(session):
|
||||
plugin_base.post_begin()
|
||||
|
||||
if has_xdist:
|
||||
import uuid
|
||||
|
||||
def pytest_configure_node(node):
|
||||
# the master for each node fills slaveinput dictionary
|
||||
# which pytest-xdist will transfer to the subprocess
|
||||
|
||||
plugin_base.memoize_important_follower_config(node.slaveinput)
|
||||
|
||||
node.slaveinput["follower_ident"] = "test_%s" % uuid.uuid4().hex[0:12]
|
||||
from alembic.testing import provision
|
||||
provision.create_follower_db(node.slaveinput["follower_ident"])
|
||||
|
||||
def pytest_testnodedown(node, error):
|
||||
from alembic.testing import provision
|
||||
provision.drop_follower_db(node.slaveinput["follower_ident"])
|
||||
|
||||
|
||||
def pytest_collection_modifyitems(session, config, items):
|
||||
# look for all those classes that specify __backend__ and
|
||||
# expand them out into per-database test cases.
|
||||
|
||||
# this is much easier to do within pytest_pycollect_makeitem, however
|
||||
# pytest is iterating through cls.__dict__ as makeitem is
|
||||
# called which causes a "dictionary changed size" error on py3k.
|
||||
# I'd submit a pullreq for them to turn it into a list first, but
|
||||
# it's to suit the rather odd use case here which is that we are adding
|
||||
# new classes to a module on the fly.
|
||||
|
||||
rebuilt_items = collections.defaultdict(list)
|
||||
items[:] = [
|
||||
item for item in
|
||||
items if isinstance(item.parent, pytest.Instance)]
|
||||
test_classes = set(item.parent for item in items)
|
||||
for test_class in test_classes:
|
||||
for sub_cls in plugin_base.generate_sub_tests(
|
||||
test_class.cls, test_class.parent.module):
|
||||
if sub_cls is not test_class.cls:
|
||||
list_ = rebuilt_items[test_class.cls]
|
||||
|
||||
for inst in pytest.Class(
|
||||
sub_cls.__name__,
|
||||
parent=test_class.parent.parent).collect():
|
||||
list_.extend(inst.collect())
|
||||
|
||||
newitems = []
|
||||
for item in items:
|
||||
if item.parent.cls in rebuilt_items:
|
||||
newitems.extend(rebuilt_items[item.parent.cls])
|
||||
rebuilt_items[item.parent.cls][:] = []
|
||||
else:
|
||||
newitems.append(item)
|
||||
|
||||
# seems like the functions attached to a test class aren't sorted already?
|
||||
# is that true and why's that? (when using unittest, they're sorted)
|
||||
items[:] = sorted(newitems, key=lambda item: (
|
||||
item.parent.parent.parent.name,
|
||||
item.parent.parent.name,
|
||||
item.name
|
||||
))
|
||||
|
||||
|
||||
def pytest_pycollect_makeitem(collector, name, obj):
|
||||
if inspect.isclass(obj) and plugin_base.want_class(obj):
|
||||
return pytest.Class(name, parent=collector)
|
||||
elif inspect.isfunction(obj) and \
|
||||
isinstance(collector, pytest.Instance) and \
|
||||
plugin_base.want_method(collector.cls, obj):
|
||||
return pytest.Function(name, parent=collector)
|
||||
else:
|
||||
return []
|
||||
|
||||
_current_class = None
|
||||
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
# here we seem to get called only based on what we collected
|
||||
# in pytest_collection_modifyitems. So to do class-based stuff
|
||||
# we have to tear that out.
|
||||
global _current_class
|
||||
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
|
||||
# ... so we're doing a little dance here to figure it out...
|
||||
if _current_class is None:
|
||||
class_setup(item.parent.parent)
|
||||
_current_class = item.parent.parent
|
||||
|
||||
# this is needed for the class-level, to ensure that the
|
||||
# teardown runs after the class is completed with its own
|
||||
# class-level teardown...
|
||||
def finalize():
|
||||
global _current_class
|
||||
class_teardown(item.parent.parent)
|
||||
_current_class = None
|
||||
item.parent.parent.addfinalizer(finalize)
|
||||
|
||||
test_setup(item)
|
||||
|
||||
|
||||
def pytest_runtest_teardown(item):
|
||||
# ...but this works better as the hook here rather than
|
||||
# using a finalizer, as the finalizer seems to get in the way
|
||||
# of the test reporting failures correctly (you get a bunch of
|
||||
# py.test assertion stuff instead)
|
||||
test_teardown(item)
|
||||
|
||||
|
||||
def test_setup(item):
|
||||
plugin_base.before_test(item, item.parent.module.__name__,
|
||||
item.parent.cls, item.name)
|
||||
|
||||
|
||||
def test_teardown(item):
|
||||
plugin_base.after_test(item)
|
||||
|
||||
|
||||
def class_setup(item):
|
||||
plugin_base.start_test_class(item.cls)
|
||||
|
||||
|
||||
def class_teardown(item):
|
||||
plugin_base.stop_test_class(item.cls)
@ -1,317 +0,0 @@
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
|
||||
this should be removable when Alembic targets SQLAlchemy 1.0.0
|
||||
"""
|
||||
from sqlalchemy.engine import url as sa_url
|
||||
from sqlalchemy import text
|
||||
from sqlalchemy import exc
|
||||
from ..util import compat
|
||||
from . import config, engines
|
||||
from .compat import get_url_backend_name
|
||||
import os
|
||||
import time
|
||||
import logging
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
FOLLOWER_IDENT = None
|
||||
|
||||
|
||||
class register(object):
|
||||
def __init__(self):
|
||||
self.fns = {}
|
||||
|
||||
@classmethod
|
||||
def init(cls, fn):
|
||||
return register().for_db("*")(fn)
|
||||
|
||||
def for_db(self, dbname):
|
||||
def decorate(fn):
|
||||
self.fns[dbname] = fn
|
||||
return self
|
||||
return decorate
|
||||
|
||||
def __call__(self, cfg, *arg):
|
||||
if isinstance(cfg, compat.string_types):
|
||||
url = sa_url.make_url(cfg)
|
||||
elif isinstance(cfg, sa_url.URL):
|
||||
url = cfg
|
||||
else:
|
||||
url = cfg.db.url
|
||||
backend = get_url_backend_name(url)
|
||||
if backend in self.fns:
|
||||
return self.fns[backend](cfg, *arg)
|
||||
else:
|
||||
return self.fns['*'](cfg, *arg)
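register implements a small per-backend dispatch: @register.init declares the "*" default, and .for_db(...) registers a dialect-specific override that wins when the config's URL matches. A sketch of the pattern (the function names here are invented, not from this file):

    # Hypothetical illustration of the dispatch pattern used below.
    @register.init
    def _stop_test_db(cfg, eng, ident):
        pass  # default: nothing to do

    @_stop_test_db.for_db("postgresql")
    def _pg_stop_test_db(cfg, eng, ident):
        eng.dispose()  # chosen instead of the default for postgresql URLs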


def create_follower_db(follower_ident):

    for cfg in _configs_for_db_operation():
        _create_db(cfg, cfg.db, follower_ident)


def configure_follower(follower_ident):
    for cfg in config.Config.all_configs():
        _configure_follower(cfg, follower_ident)


def setup_config(db_url, options, file_config, follower_ident):
    if follower_ident:
        db_url = _follower_url_from_main(db_url, follower_ident)
    db_opts = {}
    _update_db_opts(db_url, db_opts)
    eng = engines.testing_engine(db_url, db_opts)
    _post_configure_engine(db_url, eng, follower_ident)
    eng.connect().close()
    cfg = config.Config.register(eng, db_opts, options, file_config)
    if follower_ident:
        _configure_follower(cfg, follower_ident)
    return cfg


def drop_follower_db(follower_ident):
    for cfg in _configs_for_db_operation():
        _drop_db(cfg, cfg.db, follower_ident)


def _configs_for_db_operation():
    hosts = set()

    for cfg in config.Config.all_configs():
        cfg.db.dispose()

    for cfg in config.Config.all_configs():
        url = cfg.db.url
        backend = get_url_backend_name(url)
        host_conf = (
            backend,
            url.username, url.host, url.database)

        if host_conf not in hosts:
            yield cfg
            hosts.add(host_conf)

    for cfg in config.Config.all_configs():
        cfg.db.dispose()


@register.init
def _create_db(cfg, eng, ident):
    raise NotImplementedError("no DB creation routine for cfg: %s" % eng.url)


@register.init
def _drop_db(cfg, eng, ident):
    raise NotImplementedError("no DB drop routine for cfg: %s" % eng.url)


@register.init
def _update_db_opts(db_url, db_opts):
    pass


@register.init
def _configure_follower(cfg, ident):
    pass


@register.init
def _post_configure_engine(url, engine, follower_ident):
    pass


@register.init
def _follower_url_from_main(url, ident):
    url = sa_url.make_url(url)
    url.database = ident
    return url


@_update_db_opts.for_db("mssql")
def _mssql_update_db_opts(db_url, db_opts):
    db_opts['legacy_schema_aliasing'] = False


@_follower_url_from_main.for_db("sqlite")
def _sqlite_follower_url_from_main(url, ident):
    url = sa_url.make_url(url)
    if not url.database or url.database == ':memory:':
        return url
    else:
        return sa_url.make_url("sqlite:///%s.db" % ident)


@_post_configure_engine.for_db("sqlite")
def _sqlite_post_configure_engine(url, engine, follower_ident):
    from sqlalchemy import event

    @event.listens_for(engine, "connect")
    def connect(dbapi_connection, connection_record):
        # use file DBs in all cases, memory acts kind of strangely
        # as an attached
        if not follower_ident:
            dbapi_connection.execute(
                'ATTACH DATABASE "test_schema.db" AS test_schema')
        else:
            dbapi_connection.execute(
                'ATTACH DATABASE "%s_test_schema.db" AS test_schema'
                % follower_ident)


@_create_db.for_db("postgresql")
def _pg_create_db(cfg, eng, ident):
    with eng.connect().execution_options(
            isolation_level="AUTOCOMMIT") as conn:
        try:
            _pg_drop_db(cfg, conn, ident)
        except Exception:
            pass
        currentdb = conn.scalar("select current_database()")
        for attempt in range(3):
            try:
                conn.execute(
                    "CREATE DATABASE %s TEMPLATE %s" % (ident, currentdb))
            except exc.OperationalError as err:
                if attempt != 2 and "accessed by other users" in str(err):
                    time.sleep(.2)
                    continue
                else:
                    raise
            else:
                break


@_create_db.for_db("mysql")
def _mysql_create_db(cfg, eng, ident):
    with eng.connect() as conn:
        try:
            _mysql_drop_db(cfg, conn, ident)
        except Exception:
            pass
        conn.execute("CREATE DATABASE %s" % ident)
        conn.execute("CREATE DATABASE %s_test_schema" % ident)
        conn.execute("CREATE DATABASE %s_test_schema_2" % ident)


@_configure_follower.for_db("mysql")
def _mysql_configure_follower(config, ident):
    config.test_schema = "%s_test_schema" % ident
    config.test_schema_2 = "%s_test_schema_2" % ident


@_create_db.for_db("sqlite")
def _sqlite_create_db(cfg, eng, ident):
    pass


@_drop_db.for_db("postgresql")
def _pg_drop_db(cfg, eng, ident):
    with eng.connect().execution_options(
            isolation_level="AUTOCOMMIT") as conn:
        conn.execute(
            text(
                "select pg_terminate_backend(pid) from pg_stat_activity "
                "where usename=current_user and pid != pg_backend_pid() "
                "and datname=:dname"
            ), dname=ident)
        conn.execute("DROP DATABASE %s" % ident)


@_drop_db.for_db("sqlite")
def _sqlite_drop_db(cfg, eng, ident):
    if ident:
        os.remove("%s_test_schema.db" % ident)
    else:
        os.remove("%s.db" % ident)


@_drop_db.for_db("mysql")
def _mysql_drop_db(cfg, eng, ident):
    with eng.connect() as conn:
        conn.execute("DROP DATABASE %s_test_schema" % ident)
        conn.execute("DROP DATABASE %s_test_schema_2" % ident)
        conn.execute("DROP DATABASE %s" % ident)


@_create_db.for_db("oracle")
def _oracle_create_db(cfg, eng, ident):
    # NOTE: make sure you've run "ALTER DATABASE default tablespace users" or
    # similar, so that the default tablespace is not "system"; reflection will
    # fail otherwise
    with eng.connect() as conn:
        conn.execute("create user %s identified by xe" % ident)
        conn.execute("create user %s_ts1 identified by xe" % ident)
        conn.execute("create user %s_ts2 identified by xe" % ident)
        conn.execute("grant dba to %s" % (ident, ))
        conn.execute("grant unlimited tablespace to %s" % ident)
        conn.execute("grant unlimited tablespace to %s_ts1" % ident)
        conn.execute("grant unlimited tablespace to %s_ts2" % ident)


@_configure_follower.for_db("oracle")
def _oracle_configure_follower(config, ident):
    config.test_schema = "%s_ts1" % ident
    config.test_schema_2 = "%s_ts2" % ident


def _ora_drop_ignore(conn, dbname):
    try:
        conn.execute("drop user %s cascade" % dbname)
        log.info("Reaped db: %s" % dbname)
        return True
    except exc.DatabaseError as err:
        log.warn("couldn't drop db: %s" % err)
        return False


@_drop_db.for_db("oracle")
def _oracle_drop_db(cfg, eng, ident):
    with eng.connect() as conn:
        # cx_Oracle seems to occasionally leak open connections when a large
        # suite is run, even if we confirm we have zero references to
        # connection objects.
        # while there is a "kill session" command in Oracle,
        # it unfortunately does not release the connection sufficiently.
        _ora_drop_ignore(conn, ident)
        _ora_drop_ignore(conn, "%s_ts1" % ident)
        _ora_drop_ignore(conn, "%s_ts2" % ident)


def reap_oracle_dbs(eng, idents_file):
    log.info("Reaping Oracle dbs...")
    with eng.connect() as conn:
        with open(idents_file) as file_:
            idents = set(line.strip() for line in file_)

        log.info("identifiers in file: %s", ", ".join(idents))

        to_reap = conn.execute(
            "select u.username from all_users u where username "
            "like 'TEST_%' and not exists (select username "
            "from v$session where username=u.username)")
        all_names = set([username.lower() for (username, ) in to_reap])
        to_drop = set()
        for name in all_names:
            if name.endswith("_ts1") or name.endswith("_ts2"):
                continue
            elif name in idents:
                to_drop.add(name)
                if "%s_ts1" % name in all_names:
                    to_drop.add("%s_ts1" % name)
                if "%s_ts2" % name in all_names:
                    to_drop.add("%s_ts2" % name)

        dropped = total = 0
        for total, username in enumerate(to_drop, 1):
            if _ora_drop_ignore(conn, username):
                dropped += 1
    log.info(
        "Dropped %d out of %d stale databases detected", dropped, total)


@_follower_url_from_main.for_db("oracle")
def _oracle_follower_url_from_main(url, ident):
    url = sa_url.make_url(url)
    url.username = ident
    url.password = 'xe'
    return url
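A hypothetical illustration of the default follower-URL hook above, which gives each pytest-xdist worker its own database derived from its ident (values invented):

    # Dispatches to the "*" default since no postgresql override is registered.
    url = _follower_url_from_main(
        "postgresql://scott:tiger@localhost/main_db", "test_a1b2c3d4e5f6")
    # -> postgresql://scott:tiger@localhost/test_a1b2c3d4e5f6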
@ -1,168 +0,0 @@
from alembic import util

from . import exclusions

if util.sqla_094:
    from sqlalchemy.testing.requirements import Requirements
else:
    class Requirements(object):
        pass


class SuiteRequirements(Requirements):
    @property
    def schemas(self):
        """Target database must support external schemas, and have one
        named 'test_schema'."""

        return exclusions.open()

    @property
    def unique_constraint_reflection(self):
        def doesnt_have_check_uq_constraints(config):
            if not util.sqla_084:
                return True
            from sqlalchemy import inspect
            insp = inspect(config.db)
            try:
                insp.get_unique_constraints('x')
            except NotImplementedError:
                return True
            except TypeError:
                return True
            except Exception:
                pass
            return False

        return exclusions.skip_if(
            lambda config: not util.sqla_084,
            "SQLAlchemy 0.8.4 or greater required"
        ) + exclusions.skip_if(doesnt_have_check_uq_constraints)
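These properties are referenced by name from test classes: plugin_base's _possible_configs_for_cls looks up each entry in __requires__ as an attribute of the configured requirements object. A sketch (class name made up):

    # Hypothetical: a test class gated on the "schemas" requirement above.
    class SchemaTest(TestBase):
        __requires__ = ('schemas',)

        def test_create_in_test_schema(self):
            ...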

    @property
    def foreign_key_match(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_08,
            "MATCH for foreign keys added in SQLAlchemy 0.8.0"
        )

    @property
    def check_constraints_w_enforcement(self):
        """Target database must support check constraints
        and also enforce them."""

        return exclusions.open()

    @property
    def reflects_pk_names(self):
        return exclusions.closed()

    @property
    def reflects_fk_options(self):
        return exclusions.closed()

    @property
    def fail_before_sqla_079(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_079,
            "SQLAlchemy 0.7.9 or greater required"
        )

    @property
    def fail_before_sqla_080(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_08,
            "SQLAlchemy 0.8.0 or greater required"
        )

    @property
    def fail_before_sqla_083(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_083,
            "SQLAlchemy 0.8.3 or greater required"
        )

    @property
    def fail_before_sqla_084(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_084,
            "SQLAlchemy 0.8.4 or greater required"
        )

    @property
    def fail_before_sqla_09(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_09,
            "SQLAlchemy 0.9.0 or greater required"
        )

    @property
    def fail_before_sqla_100(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_100,
            "SQLAlchemy 1.0.0 or greater required"
        )

    @property
    def fail_before_sqla_1010(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_1010,
            "SQLAlchemy 1.0.10 or greater required"
        )

    @property
    def fail_before_sqla_099(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_099,
            "SQLAlchemy 0.9.9 or greater required"
        )

    @property
    def fail_before_sqla_110(self):
        return exclusions.fails_if(
            lambda config: not util.sqla_110,
            "SQLAlchemy 1.1.0 or greater required"
        )

    @property
    def sqlalchemy_08(self):

        return exclusions.skip_if(
            lambda config: not util.sqla_08,
            "SQLAlchemy 0.8.0b2 or greater required"
        )

    @property
    def sqlalchemy_09(self):
        return exclusions.skip_if(
            lambda config: not util.sqla_09,
            "SQLAlchemy 0.9.0 or greater required"
        )

    @property
    def sqlalchemy_092(self):
        return exclusions.skip_if(
            lambda config: not util.sqla_092,
            "SQLAlchemy 0.9.2 or greater required"
        )

    @property
    def sqlalchemy_094(self):
        return exclusions.skip_if(
            lambda config: not util.sqla_094,
            "SQLAlchemy 0.9.4 or greater required"
        )

    @property
    def sqlalchemy_1014(self):
        return exclusions.skip_if(
            lambda config: not util.sqla_1014,
            "SQLAlchemy 1.0.14 or greater required"
        )

    @property
    def sqlalchemy_110(self):
        return exclusions.skip_if(
            lambda config: not util.sqla_110,
            "SQLAlchemy 1.1.0 or greater required"
        )
@ -1,48 +0,0 @@
#!/usr/bin/env python
# testing/runner.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
Nose test runner module.

This script is a front-end to "nosetests" which
installs SQLAlchemy's testing plugin into the local environment.

The script is intended to be used by third-party dialects and extensions
that run within SQLAlchemy's testing framework.  The runner can
be invoked via::

    python -m alembic.testing.runner

The script is then essentially the same as the "nosetests" script, including
all of the usual Nose options.  The test environment requires that a
setup.cfg is locally present including various required options.

Note that when using this runner, Nose's "coverage" plugin will not be
able to provide coverage for SQLAlchemy itself, since SQLAlchemy is
imported into sys.modules before coverage is started.  The special
script sqla_nose.py is provided as a top-level script which loads the
plugin in a special (somewhat hacky) way so that coverage against
SQLAlchemy itself is possible.

"""
from .plugin.noseplugin import NoseSQLAlchemy
import nose


def main():
    nose.main(addplugins=[NoseSQLAlchemy()])


def setup_py_test():
    """Runner to use for the 'test_suite' entry of your setup.py.

    Prevents any name clash shenanigans from the command line
    argument "test" that the "setup.py test" command sends
    to nose.

    """
    nose.main(addplugins=[NoseSQLAlchemy()], argv=['runner'])
@ -1,19 +0,0 @@
from sqlalchemy.util import decorator


@decorator
def provide_metadata(fn, *args, **kw):
    """Provide bound MetaData for a single test, dropping afterwards."""

    from . import config
    from sqlalchemy import schema

    metadata = schema.MetaData(config.db)
    self = args[0]
    prev_meta = getattr(self, 'metadata', None)
    self.metadata = metadata
    try:
        return fn(*args, **kw)
    finally:
        metadata.drop_all()
        self.metadata = prev_meta
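A hypothetical usage sketch: self.metadata is bound to the current test database and dropped when the test returns (Table/Column/Integer assumed imported from sqlalchemy; the class is made up):

    class ReflectionTest(TestBase):

        @provide_metadata
        def test_create_table(self):
            t = Table('x', self.metadata,
                      Column('id', Integer, primary_key=True))
            t.create()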
@ -1,43 +0,0 @@
# testing/warnings.py
# Copyright (C) 2005-2017 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""NOTE: copied/adapted from SQLAlchemy master for backwards compatibility;
this should be removable when Alembic targets SQLAlchemy 0.9.4.
"""

from __future__ import absolute_import

import warnings
from sqlalchemy import exc as sa_exc
import re


def setup_filters():
    """Set global warning behavior for the test suite."""

    warnings.filterwarnings('ignore',
                            category=sa_exc.SAPendingDeprecationWarning)
    warnings.filterwarnings('error', category=sa_exc.SADeprecationWarning)
    warnings.filterwarnings('error', category=sa_exc.SAWarning)


def assert_warnings(fn, warning_msgs, regex=False):
    """Assert that each of the given warnings is emitted by fn."""

    from .assertions import eq_

    with warnings.catch_warnings(record=True) as log:
        # ensure that nothing is going into __warningregistry__
        warnings.filterwarnings("always")

        result = fn()
    for warning in log:
        popwarn = warning_msgs.pop(0)
        if regex:
            assert re.match(popwarn, str(warning.message))
        else:
            eq_(popwarn, str(warning.message))
    return result
|
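For example (a minimal sketch), a callable expected to emit a known
sequence of warnings can be verified as::

    import warnings

    def emits():
        warnings.warn("attribute 'foo' is deprecated")
        return 42

    result = assert_warnings(emits, ["attribute 'foo' is deprecated"])
    assert result == 42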
|
@@ -1,17 +0,0 @@
|
|||
from .langhelpers import ( # noqa
|
||||
asbool, rev_id, to_tuple, to_list, memoized_property, dedupe_tuple,
|
||||
immutabledict, _with_legacy_names, Dispatcher, ModuleClsProxy)
|
||||
from .messaging import ( # noqa
|
||||
write_outstream, status, err, obfuscate_url_pw, warn, msg, format_as_comma)
|
||||
from .pyfiles import ( # noqa
|
||||
template_to_file, coerce_resource_to_filename, simple_pyc_file_from_path,
|
||||
pyc_file_from_path, load_python_file, edit)
|
||||
from .sqla_compat import ( # noqa
|
||||
sqla_07, sqla_079, sqla_08, sqla_083, sqla_084, sqla_09, sqla_092,
|
||||
sqla_094, sqla_099, sqla_100, sqla_105, sqla_110, sqla_1010, sqla_1014)
|
||||
from .exc import CommandError
|
||||
|
||||
|
||||
if not sqla_07:
|
||||
raise CommandError(
|
||||
"SQLAlchemy 0.7.3 or greater is required. ")
|
|
@@ -1,175 +0,0 @@
|
|||
import io
|
||||
import sys
|
||||
|
||||
if sys.version_info < (2, 6):
|
||||
raise NotImplementedError("Python 2.6 or greater is required.")
|
||||
|
||||
py27 = sys.version_info >= (2, 7)
|
||||
py2k = sys.version_info < (3, 0)
|
||||
py3k = sys.version_info >= (3, 0)
|
||||
py33 = sys.version_info >= (3, 3)
|
||||
|
||||
if py3k:
|
||||
from io import StringIO
|
||||
else:
|
||||
# accepts strings
|
||||
from StringIO import StringIO
|
||||
|
||||
if py3k:
|
||||
import builtins as compat_builtins
|
||||
string_types = str,
|
||||
binary_type = bytes
|
||||
text_type = str
|
||||
|
||||
def callable(fn):
|
||||
return hasattr(fn, '__call__')
|
||||
|
||||
def u(s):
|
||||
return s
|
||||
|
||||
def ue(s):
|
||||
return s
|
||||
|
||||
range = range
|
||||
else:
|
||||
import __builtin__ as compat_builtins
|
||||
string_types = basestring,
|
||||
binary_type = str
|
||||
text_type = unicode
|
||||
callable = callable
|
||||
|
||||
def u(s):
|
||||
return unicode(s, "utf-8")
|
||||
|
||||
def ue(s):
|
||||
return unicode(s, "unicode_escape")
|
||||
|
||||
range = xrange
|
||||
|
||||
if py3k:
|
||||
from configparser import ConfigParser as SafeConfigParser
|
||||
import configparser
|
||||
else:
|
||||
from ConfigParser import SafeConfigParser
|
||||
import ConfigParser as configparser
|
||||
|
||||
if py2k:
|
||||
from mako.util import parse_encoding
|
||||
|
||||
if py33:
|
||||
from importlib import machinery
|
||||
|
||||
def load_module_py(module_id, path):
|
||||
return machinery.SourceFileLoader(
|
||||
module_id, path).load_module(module_id)
|
||||
|
||||
def load_module_pyc(module_id, path):
|
||||
return machinery.SourcelessFileLoader(
|
||||
module_id, path).load_module(module_id)
|
||||
|
||||
else:
|
||||
import imp
|
||||
|
||||
def load_module_py(module_id, path):
|
||||
with open(path, 'rb') as fp:
|
||||
mod = imp.load_source(module_id, path, fp)
|
||||
if py2k:
|
||||
source_encoding = parse_encoding(fp)
|
||||
if source_encoding:
|
||||
mod._alembic_source_encoding = source_encoding
|
||||
return mod
|
||||
|
||||
def load_module_pyc(module_id, path):
|
||||
with open(path, 'rb') as fp:
|
||||
mod = imp.load_compiled(module_id, path, fp)
|
||||
# no source encoding here
|
||||
return mod
|
||||
|
||||
try:
|
||||
exec_ = getattr(compat_builtins, 'exec')
|
||||
except AttributeError:
|
||||
# Python 2
|
||||
def exec_(func_text, globals_, lcl):
|
||||
exec('exec func_text in globals_, lcl')
|
||||
|
||||
################################################
|
||||
# cross-compatible metaclass implementation
|
||||
# Copyright (c) 2010-2012 Benjamin Peterson
|
||||
|
||||
|
||||
def with_metaclass(meta, base=object):
|
||||
"""Create a base class with a metaclass."""
|
||||
return meta("%sBase" % meta.__name__, (base,), {})
|
||||
################################################
|
||||
|
||||
if py3k:
|
||||
def reraise(tp, value, tb=None, cause=None):
|
||||
if cause is not None:
|
||||
value.__cause__ = cause
|
||||
if value.__traceback__ is not tb:
|
||||
raise value.with_traceback(tb)
|
||||
raise value
|
||||
|
||||
def raise_from_cause(exception, exc_info=None):
|
||||
if exc_info is None:
|
||||
exc_info = sys.exc_info()
|
||||
exc_type, exc_value, exc_tb = exc_info
|
||||
reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
|
||||
else:
|
||||
exec("def reraise(tp, value, tb=None, cause=None):\n"
|
||||
" raise tp, value, tb\n")
|
||||
|
||||
def raise_from_cause(exception, exc_info=None):
|
||||
# not as nice as that of Py3K, but at least preserves
|
||||
# the code line where the issue occurred
|
||||
if exc_info is None:
|
||||
exc_info = sys.exc_info()
|
||||
exc_type, exc_value, exc_tb = exc_info
|
||||
reraise(type(exception), exception, tb=exc_tb)
|
||||
|
||||
# produce a wrapper that allows encoded text to stream
|
||||
# into a given buffer, but doesn't close it.
|
||||
# not sure of a more idiomatic approach to this.
|
||||
class EncodedIO(io.TextIOWrapper):
|
||||
|
||||
def close(self):
|
||||
pass
|
||||
|
||||
if py2k:
|
||||
# in Py2K, the io.* package is awkward because it does not
|
||||
# easily wrap the file type (e.g. sys.stdout) and I can't
|
||||
# figure out at all how to wrap StringIO.StringIO (used by nosetests)
|
||||
# and also might be user specified too. So create a full
|
||||
# adapter.
|
||||
|
||||
class ActLikePy3kIO(object):
|
||||
|
||||
"""Produce an object capable of wrapping either
|
||||
sys.stdout (e.g. file) *or* StringIO.StringIO().
|
||||
|
||||
"""
|
||||
|
||||
def _false(self):
|
||||
return False
|
||||
|
||||
def _true(self):
|
||||
return True
|
||||
|
||||
readable = seekable = _false
|
||||
writable = _true
|
||||
closed = False
|
||||
|
||||
def __init__(self, file_):
|
||||
self.file_ = file_
|
||||
|
||||
def write(self, text):
|
||||
return self.file_.write(text)
|
||||
|
||||
def flush(self):
|
||||
return self.file_.flush()
|
||||
|
||||
class EncodedIO(EncodedIO):
|
||||
|
||||
def __init__(self, file_, encoding):
|
||||
super(EncodedIO, self).__init__(
|
||||
ActLikePy3kIO(file_), encoding=encoding)
|
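A sketch of how the ``reraise()`` / ``raise_from_cause()`` helpers above
are typically used to wrap a lower-level failure while keeping its
context::

    from alembic.util.exc import CommandError

    def run_step(step):
        try:
            return step()
        except Exception:
            # Python 3 chains the original error as __cause__;
            # Python 2 at least preserves the offending code line
            raise_from_cause(CommandError("step failed"))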
|
@@ -1,2 +0,0 @@
|
|||
class CommandError(Exception):
|
||||
pass
|
|
@@ -1,331 +0,0 @@
|
|||
import textwrap
|
||||
import warnings
|
||||
import inspect
|
||||
import uuid
|
||||
import collections
|
||||
|
||||
from .compat import callable, exec_, string_types, with_metaclass
|
||||
|
||||
from sqlalchemy.util import format_argspec_plus, update_wrapper
|
||||
from sqlalchemy.util.compat import inspect_getfullargspec
|
||||
|
||||
|
||||
class _ModuleClsMeta(type):
|
||||
def __setattr__(cls, key, value):
|
||||
super(_ModuleClsMeta, cls).__setattr__(key, value)
|
||||
cls._update_module_proxies(key)
|
||||
|
||||
|
||||
class ModuleClsProxy(with_metaclass(_ModuleClsMeta)):
|
||||
"""Create module level proxy functions for the
|
||||
methods on a given class.
|
||||
|
||||
The functions will have signatures compatible
|
||||
with those of the methods.
|
||||
|
||||
"""
|
||||
|
||||
_setups = collections.defaultdict(lambda: (set(), []))
|
||||
|
||||
@classmethod
|
||||
def _update_module_proxies(cls, name):
|
||||
attr_names, modules = cls._setups[cls]
|
||||
for globals_, locals_ in modules:
|
||||
cls._add_proxied_attribute(name, globals_, locals_, attr_names)
|
||||
|
||||
def _install_proxy(self):
|
||||
attr_names, modules = self._setups[self.__class__]
|
||||
for globals_, locals_ in modules:
|
||||
globals_['_proxy'] = self
|
||||
for attr_name in attr_names:
|
||||
globals_[attr_name] = getattr(self, attr_name)
|
||||
|
||||
def _remove_proxy(self):
|
||||
attr_names, modules = self._setups[self.__class__]
|
||||
for globals_, locals_ in modules:
|
||||
globals_['_proxy'] = None
|
||||
for attr_name in attr_names:
|
||||
del globals_[attr_name]
|
||||
|
||||
@classmethod
|
||||
def create_module_class_proxy(cls, globals_, locals_):
|
||||
attr_names, modules = cls._setups[cls]
|
||||
modules.append(
|
||||
(globals_, locals_)
|
||||
)
|
||||
cls._setup_proxy(globals_, locals_, attr_names)
|
||||
|
||||
@classmethod
|
||||
def _setup_proxy(cls, globals_, locals_, attr_names):
|
||||
for methname in dir(cls):
|
||||
cls._add_proxied_attribute(methname, globals_, locals_, attr_names)
|
||||
|
||||
@classmethod
|
||||
def _add_proxied_attribute(cls, methname, globals_, locals_, attr_names):
|
||||
if not methname.startswith('_'):
|
||||
meth = getattr(cls, methname)
|
||||
if callable(meth):
|
||||
locals_[methname] = cls._create_method_proxy(
|
||||
methname, globals_, locals_)
|
||||
else:
|
||||
attr_names.add(methname)
|
||||
|
||||
@classmethod
|
||||
def _create_method_proxy(cls, name, globals_, locals_):
|
||||
fn = getattr(cls, name)
|
||||
spec = inspect.getargspec(fn)
|
||||
if spec[0] and spec[0][0] == 'self':
|
||||
spec[0].pop(0)
|
||||
args = inspect.formatargspec(*spec)
|
||||
num_defaults = 0
|
||||
if spec[3]:
|
||||
num_defaults += len(spec[3])
|
||||
name_args = spec[0]
|
||||
if num_defaults:
|
||||
defaulted_vals = name_args[0 - num_defaults:]
|
||||
else:
|
||||
defaulted_vals = ()
|
||||
|
||||
apply_kw = inspect.formatargspec(
|
||||
name_args, spec[1], spec[2],
|
||||
defaulted_vals,
|
||||
formatvalue=lambda x: '=' + x)
|
||||
|
||||
def _name_error(name):
|
||||
raise NameError(
|
||||
"Can't invoke function '%s', as the proxy object has "
|
||||
"not yet been "
|
||||
"established for the Alembic '%s' class. "
|
||||
"Try placing this code inside a callable." % (
|
||||
name, cls.__name__
|
||||
))
|
||||
globals_['_name_error'] = _name_error
|
||||
|
||||
translations = getattr(fn, "_legacy_translations", [])
|
||||
if translations:
|
||||
outer_args = inner_args = "*args, **kw"
|
||||
translate_str = "args, kw = _translate(%r, %r, %r, args, kw)" % (
|
||||
fn.__name__,
|
||||
tuple(spec),
|
||||
translations
|
||||
)
|
||||
|
||||
def translate(fn_name, spec, translations, args, kw):
|
||||
return_kw = {}
|
||||
return_args = []
|
||||
|
||||
for oldname, newname in translations:
|
||||
if oldname in kw:
|
||||
warnings.warn(
|
||||
"Argument %r is now named %r "
|
||||
"for method %s()." % (
|
||||
oldname, newname, fn_name
|
||||
))
|
||||
return_kw[newname] = kw.pop(oldname)
|
||||
return_kw.update(kw)
|
||||
|
||||
args = list(args)
|
||||
if spec[3]:
|
||||
pos_only = spec[0][:-len(spec[3])]
|
||||
else:
|
||||
pos_only = spec[0]
|
||||
for arg in pos_only:
|
||||
if arg not in return_kw:
|
||||
try:
|
||||
return_args.append(args.pop(0))
|
||||
except IndexError:
|
||||
raise TypeError(
|
||||
"missing required positional argument: %s"
|
||||
% arg)
|
||||
return_args.extend(args)
|
||||
|
||||
return return_args, return_kw
|
||||
globals_['_translate'] = translate
|
||||
else:
|
||||
outer_args = args[1:-1]
|
||||
inner_args = apply_kw[1:-1]
|
||||
translate_str = ""
|
||||
|
||||
func_text = textwrap.dedent("""\
|
||||
def %(name)s(%(args)s):
|
||||
%(doc)r
|
||||
%(translate)s
|
||||
try:
|
||||
p = _proxy
|
||||
except NameError:
|
||||
_name_error('%(name)s')
|
||||
return _proxy.%(name)s(%(apply_kw)s)
|
||||
""" % {
|
||||
'name': name,
|
||||
'translate': translate_str,
|
||||
'args': outer_args,
|
||||
'apply_kw': inner_args,
|
||||
'doc': fn.__doc__,
|
||||
})
|
||||
lcl = {}
|
||||
exec_(func_text, globals_, lcl)
|
||||
return lcl[name]
|
||||
|
||||
|
||||
def _with_legacy_names(translations):
|
||||
def decorate(fn):
|
||||
fn._legacy_translations = translations
|
||||
return fn
|
||||
|
||||
return decorate
|
||||
|
||||
|
||||
def asbool(value):
|
||||
return value is not None and \
|
||||
value.lower() == 'true'
|
||||
|
||||
|
||||
def rev_id():
|
||||
return uuid.uuid4().hex[-12:]
|
||||
|
||||
|
||||
def to_list(x, default=None):
|
||||
if x is None:
|
||||
return default
|
||||
elif isinstance(x, string_types):
|
||||
return [x]
|
||||
elif isinstance(x, collections.Iterable):
|
||||
return list(x)
|
||||
else:
|
||||
return [x]
|
||||
|
||||
|
||||
def to_tuple(x, default=None):
|
||||
if x is None:
|
||||
return default
|
||||
elif isinstance(x, string_types):
|
||||
return (x, )
|
||||
elif isinstance(x, collections.Iterable):
|
||||
return tuple(x)
|
||||
else:
|
||||
return (x, )
|
||||
|
||||
|
||||
def unique_list(seq, hashfunc=None):
|
||||
seen = set()
|
||||
seen_add = seen.add
|
||||
if not hashfunc:
|
||||
return [x for x in seq
|
||||
if x not in seen
|
||||
and not seen_add(x)]
|
||||
else:
|
||||
return [x for x in seq
|
||||
if hashfunc(x) not in seen
|
||||
and not seen_add(hashfunc(x))]
|
||||
|
||||
|
||||
def dedupe_tuple(tup):
|
||||
return tuple(unique_list(tup))
|
||||
|
||||
|
||||
|
||||
class memoized_property(object):
|
||||
|
||||
"""A read-only @property that is only evaluated once."""
|
||||
|
||||
def __init__(self, fget, doc=None):
|
||||
self.fget = fget
|
||||
self.__doc__ = doc or fget.__doc__
|
||||
self.__name__ = fget.__name__
|
||||
|
||||
def __get__(self, obj, cls):
|
||||
if obj is None:
|
||||
return self
|
||||
obj.__dict__[self.__name__] = result = self.fget(obj)
|
||||
return result
|
||||
|
||||
|
||||
class immutabledict(dict):
|
||||
|
||||
def _immutable(self, *arg, **kw):
|
||||
raise TypeError("%s object is immutable" % self.__class__.__name__)
|
||||
|
||||
__delitem__ = __setitem__ = __setattr__ = \
|
||||
clear = pop = popitem = setdefault = \
|
||||
update = _immutable
|
||||
|
||||
def __new__(cls, *args):
|
||||
new = dict.__new__(cls)
|
||||
dict.__init__(new, *args)
|
||||
return new
|
||||
|
||||
def __init__(self, *args):
|
||||
pass
|
||||
|
||||
def __reduce__(self):
|
||||
return immutabledict, (dict(self), )
|
||||
|
||||
def union(self, d):
|
||||
if not self:
|
||||
return immutabledict(d)
|
||||
else:
|
||||
d2 = immutabledict(self)
|
||||
dict.update(d2, d)
|
||||
return d2
|
||||
|
||||
def __repr__(self):
|
||||
return "immutabledict(%s)" % dict.__repr__(self)
|
||||
|
||||
|
||||
class Dispatcher(object):
|
||||
def __init__(self, uselist=False):
|
||||
self._registry = {}
|
||||
self.uselist = uselist
|
||||
|
||||
def dispatch_for(self, target, qualifier='default'):
|
||||
def decorate(fn):
|
||||
if self.uselist:
|
||||
self._registry.setdefault((target, qualifier), []).append(fn)
|
||||
else:
|
||||
assert (target, qualifier) not in self._registry
|
||||
self._registry[(target, qualifier)] = fn
|
||||
return fn
|
||||
return decorate
|
||||
|
||||
def dispatch(self, obj, qualifier='default'):
|
||||
|
||||
if isinstance(obj, string_types):
|
||||
targets = [obj]
|
||||
elif isinstance(obj, type):
|
||||
targets = obj.__mro__
|
||||
else:
|
||||
targets = type(obj).__mro__
|
||||
|
||||
for spcls in targets:
|
||||
if qualifier != 'default' and (spcls, qualifier) in self._registry:
|
||||
return self._fn_or_list(
|
||||
self._registry[(spcls, qualifier)])
|
||||
elif (spcls, 'default') in self._registry:
|
||||
return self._fn_or_list(
|
||||
self._registry[(spcls, 'default')])
|
||||
else:
|
||||
raise ValueError("no dispatch function for object: %s" % obj)
|
||||
|
||||
def _fn_or_list(self, fn_or_list):
|
||||
if self.uselist:
|
||||
def go(*arg, **kw):
|
||||
for fn in fn_or_list:
|
||||
fn(*arg, **kw)
|
||||
return go
|
||||
else:
|
||||
return fn_or_list
|
||||
|
||||
def branch(self):
|
||||
"""Return a copy of this dispatcher that is independently
|
||||
writable."""
|
||||
|
||||
d = Dispatcher()
|
||||
if self.uselist:
|
||||
d._registry.update(
|
||||
(k, [fn for fn in self._registry[k]])
|
||||
for k in self._registry
|
||||
)
|
||||
else:
|
||||
d._registry.update(self._registry)
|
||||
return d
|
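A brief sketch of ``Dispatcher`` in use: handlers are registered per type
(or string key), and ``dispatch()`` walks the ``__mro__`` to find the most
specific registration::

    from sqlalchemy import schema

    dispatcher = Dispatcher()

    @dispatcher.dispatch_for(schema.Table)
    def visit_table(table):
        print("table: %s" % table.name)

    t = schema.Table('t', schema.MetaData())
    dispatcher.dispatch(t)(t)  # prints "table: t"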
|
@@ -1,94 +0,0 @@
|
|||
from .compat import py27, binary_type, string_types
|
||||
import sys
|
||||
from sqlalchemy.engine import url
|
||||
import warnings
|
||||
import textwrap
|
||||
import collections
|
||||
import logging
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
if py27:
|
||||
# disable "no handler found" errors
|
||||
logging.getLogger('alembic').addHandler(logging.NullHandler())
|
||||
|
||||
|
||||
try:
|
||||
import fcntl
|
||||
import termios
|
||||
import struct
|
||||
ioctl = fcntl.ioctl(0, termios.TIOCGWINSZ,
|
||||
struct.pack('HHHH', 0, 0, 0, 0))
|
||||
_h, TERMWIDTH, _hp, _wp = struct.unpack('HHHH', ioctl)
|
||||
if TERMWIDTH <= 0: # can occur if running in emacs pseudo-tty
|
||||
TERMWIDTH = None
|
||||
except (ImportError, IOError):
|
||||
TERMWIDTH = None
|
||||
|
||||
|
||||
def write_outstream(stream, *text):
|
||||
encoding = getattr(stream, 'encoding', 'ascii') or 'ascii'
|
||||
for t in text:
|
||||
if not isinstance(t, binary_type):
|
||||
t = t.encode(encoding, 'replace')
|
||||
t = t.decode(encoding)
|
||||
try:
|
||||
stream.write(t)
|
||||
except IOError:
|
||||
# suppress "broken pipe" errors.
|
||||
# no known way to handle this on Python 3 however
|
||||
# as the exception is "ignored" (noisily) in TextIOWrapper.
|
||||
break
|
||||
|
||||
|
||||
def status(_statmsg, fn, *arg, **kw):
|
||||
msg(_statmsg + " ...", False)
|
||||
try:
|
||||
ret = fn(*arg, **kw)
|
||||
write_outstream(sys.stdout, " done\n")
|
||||
return ret
|
||||
except:
|
||||
write_outstream(sys.stdout, " FAILED\n")
|
||||
raise
|
||||
|
||||
|
||||
def err(message):
|
||||
log.error(message)
|
||||
msg("FAILED: %s" % message)
|
||||
sys.exit(-1)
|
||||
|
||||
|
||||
def obfuscate_url_pw(u):
|
||||
u = url.make_url(u)
|
||||
if u.password:
|
||||
u.password = 'XXXXX'
|
||||
return str(u)
|
||||
|
||||
|
||||
def warn(msg):
|
||||
warnings.warn(msg)
|
||||
|
||||
|
||||
def msg(msg, newline=True):
|
||||
if TERMWIDTH is None:
|
||||
write_outstream(sys.stdout, msg)
|
||||
if newline:
|
||||
write_outstream(sys.stdout, "\n")
|
||||
else:
|
||||
# left indent output lines
|
||||
lines = textwrap.wrap(msg, TERMWIDTH)
|
||||
if len(lines) > 1:
|
||||
for line in lines[0:-1]:
|
||||
write_outstream(sys.stdout, " ", line, "\n")
|
||||
write_outstream(sys.stdout, " ", lines[-1], ("\n" if newline else ""))
|
||||
|
||||
|
||||
def format_as_comma(value):
|
||||
if value is None:
|
||||
return ""
|
||||
elif isinstance(value, string_types):
|
||||
return value
|
||||
elif isinstance(value, collections.Iterable):
|
||||
return ", ".join(value)
|
||||
else:
|
||||
raise ValueError("Don't know how to comma-format %r" % value)
|
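For instance (a sketch; ``do_upgrade`` is a hypothetical callable),
``status()`` prints the message, runs the callable, then appends
``done`` or ``FAILED``::

    def do_upgrade():
        # ... perform the actual work ...
        return True

    ret = status("Running upgrade", do_upgrade)
    # emits: "Running upgrade ... done"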
|
@@ -1,103 +0,0 @@
|
|||
import sys
|
||||
import os
|
||||
import re
|
||||
from .compat import load_module_py, load_module_pyc
|
||||
from mako.template import Template
|
||||
from mako import exceptions
|
||||
import tempfile
|
||||
from .exc import CommandError
|
||||
|
||||
|
||||
def template_to_file(template_file, dest, output_encoding, **kw):
|
||||
template = Template(filename=template_file)
|
||||
try:
|
||||
output = template.render_unicode(**kw).encode(output_encoding)
|
||||
except:
|
||||
with tempfile.NamedTemporaryFile(suffix='.txt', delete=False) as ntf:
|
||||
ntf.write(
|
||||
exceptions.text_error_template().
|
||||
render_unicode().encode(output_encoding))
|
||||
fname = ntf.name
|
||||
raise CommandError(
|
||||
"Template rendering failed; see %s for a "
|
||||
"template-oriented traceback." % fname)
|
||||
else:
|
||||
with open(dest, 'wb') as f:
|
||||
f.write(output)
|
||||
|
||||
|
||||
def coerce_resource_to_filename(fname):
|
||||
"""Interpret a filename as either a filesystem location or as a package
|
||||
resource.
|
||||
|
||||
Names that are non-absolute paths and contain a colon
|
||||
are interpreted as resources and coerced to a file location.
|
||||
|
||||
"""
|
||||
if not os.path.isabs(fname) and ":" in fname:
|
||||
import pkg_resources
|
||||
fname = pkg_resources.resource_filename(*fname.split(':'))
|
||||
return fname
|
||||
|
||||
|
||||
def simple_pyc_file_from_path(path):
|
||||
"""Given a python source path, return the so-called
|
||||
"sourceless" .pyc or .pyo path.
|
||||
|
||||
This is just a .pyc or .pyo file located where the .py file would be.
|
||||
|
||||
Even with PEP-3147, which normally puts .pyc/.pyo files in __pycache__,
|
||||
this use case remains supported as a so-called "sourceless module import".
|
||||
|
||||
"""
|
||||
if sys.flags.optimize:
|
||||
return path + "o" # e.g. .pyo
|
||||
else:
|
||||
return path + "c" # e.g. .pyc
|
||||
|
||||
|
||||
def pyc_file_from_path(path):
|
||||
"""Given a python source path, locate the .pyc.
|
||||
|
||||
See http://www.python.org/dev/peps/pep-3147/
|
||||
#detecting-pep-3147-availability
|
||||
http://www.python.org/dev/peps/pep-3147/#file-extension-checks
|
||||
|
||||
"""
|
||||
import imp
|
||||
has3147 = hasattr(imp, 'get_tag')
|
||||
if has3147:
|
||||
return imp.cache_from_source(path)
|
||||
else:
|
||||
return simple_pyc_file_from_path(path)
|
||||
|
||||
|
||||
def edit(path):
|
||||
"""Given a source path, run the EDITOR for it"""
|
||||
|
||||
import editor
|
||||
try:
|
||||
editor.edit(path)
|
||||
except Exception as exc:
|
||||
raise CommandError('Error executing editor (%s)' % (exc,))
|
||||
|
||||
|
||||
def load_python_file(dir_, filename):
|
||||
"""Load a file from the given path as a Python module."""
|
||||
|
||||
module_id = re.sub(r'\W', "_", filename)
|
||||
path = os.path.join(dir_, filename)
|
||||
_, ext = os.path.splitext(filename)
|
||||
if ext == ".py":
|
||||
if os.path.exists(path):
|
||||
module = load_module_py(module_id, path)
|
||||
elif os.path.exists(simple_pyc_file_from_path(path)):
|
||||
# look for sourceless load
|
||||
module = load_module_pyc(
|
||||
module_id, simple_pyc_file_from_path(path))
|
||||
else:
|
||||
raise ImportError("Can't find Python file %s" % path)
|
||||
elif ext in (".pyc", ".pyo"):
|
||||
module = load_module_pyc(module_id, path)
|
||||
del sys.modules[module_id]
|
||||
return module
|
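A sketch of typical use (path and filename are hypothetical): a migration
script is loaded from a versions directory as a transient module and its
module-level attributes inspected::

    script = load_python_file(
        '/path/to/project/alembic/versions',
        '1975ea83b712_create_account_table.py')
    print(script.revision)  # assumes the script defines 'revision'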
|
@@ -1,182 +0,0 @@
|
|||
import re
|
||||
from sqlalchemy import __version__
|
||||
from sqlalchemy.schema import ForeignKeyConstraint, CheckConstraint, Column
|
||||
from sqlalchemy import types as sqltypes
|
||||
from sqlalchemy import schema, sql
|
||||
from sqlalchemy.sql.visitors import traverse
|
||||
from sqlalchemy.ext.compiler import compiles
|
||||
from sqlalchemy.sql.expression import _BindParamClause
|
||||
from . import compat
|
||||
|
||||
|
||||
def _safe_int(value):
|
||||
try:
|
||||
return int(value)
|
||||
except:
|
||||
return value
|
||||
_vers = tuple(
|
||||
[_safe_int(x) for x in re.findall(r'(\d+|[abc]\d)', __version__)])
|
||||
sqla_07 = _vers > (0, 7, 2)
|
||||
sqla_079 = _vers >= (0, 7, 9)
|
||||
sqla_08 = _vers >= (0, 8, 0)
|
||||
sqla_083 = _vers >= (0, 8, 3)
|
||||
sqla_084 = _vers >= (0, 8, 4)
|
||||
sqla_09 = _vers >= (0, 9, 0)
|
||||
sqla_092 = _vers >= (0, 9, 2)
|
||||
sqla_094 = _vers >= (0, 9, 4)
|
||||
sqla_099 = _vers >= (0, 9, 9)
|
||||
sqla_100 = _vers >= (1, 0, 0)
|
||||
sqla_105 = _vers >= (1, 0, 5)
|
||||
sqla_1010 = _vers >= (1, 0, 10)
|
||||
sqla_110 = _vers >= (1, 1, 0)
|
||||
sqla_1014 = _vers >= (1, 0, 14)
|
||||
|
||||
if sqla_08:
|
||||
from sqlalchemy.sql.expression import TextClause
|
||||
else:
|
||||
from sqlalchemy.sql.expression import _TextClause as TextClause
|
||||
|
||||
|
||||
def _table_for_constraint(constraint):
|
||||
if isinstance(constraint, ForeignKeyConstraint):
|
||||
return constraint.parent
|
||||
else:
|
||||
return constraint.table
|
||||
|
||||
|
||||
def _columns_for_constraint(constraint):
|
||||
if isinstance(constraint, ForeignKeyConstraint):
|
||||
return [fk.parent for fk in constraint.elements]
|
||||
elif isinstance(constraint, CheckConstraint):
|
||||
return _find_columns(constraint.sqltext)
|
||||
else:
|
||||
return list(constraint.columns)
|
||||
|
||||
|
||||
def _fk_spec(constraint):
|
||||
if sqla_100:
|
||||
source_columns = [
|
||||
constraint.columns[key].name for key in constraint.column_keys]
|
||||
else:
|
||||
source_columns = [
|
||||
element.parent.name for element in constraint.elements]
|
||||
|
||||
source_table = constraint.parent.name
|
||||
source_schema = constraint.parent.schema
|
||||
target_schema = constraint.elements[0].column.table.schema
|
||||
target_table = constraint.elements[0].column.table.name
|
||||
target_columns = [element.column.name for element in constraint.elements]
|
||||
ondelete = constraint.ondelete
|
||||
onupdate = constraint.onupdate
|
||||
deferrable = constraint.deferrable
|
||||
initially = constraint.initially
|
||||
return (
|
||||
source_schema, source_table,
|
||||
source_columns, target_schema, target_table, target_columns,
|
||||
onupdate, ondelete, deferrable, initially)
|
||||
|
||||
|
||||
def _fk_is_self_referential(constraint):
|
||||
spec = constraint.elements[0]._get_colspec()
|
||||
tokens = spec.split(".")
|
||||
tokens.pop(-1) # colname
|
||||
tablekey = ".".join(tokens)
|
||||
return tablekey == constraint.parent.key
|
||||
|
||||
|
||||
def _is_type_bound(constraint):
|
||||
# this deals with SQLAlchemy #3260, don't copy CHECK constraints
|
||||
# that will be generated by the type.
|
||||
if sqla_100:
|
||||
# new feature added for #3260
|
||||
return constraint._type_bound
|
||||
else:
|
||||
# old way, look at what we know Boolean/Enum to use
|
||||
return (
|
||||
constraint._create_rule is not None and
|
||||
isinstance(
|
||||
getattr(constraint._create_rule, "target", None),
|
||||
sqltypes.SchemaType)
|
||||
)
|
||||
|
||||
|
||||
def _find_columns(clause):
|
||||
"""locate Column objects within the given expression."""
|
||||
|
||||
cols = set()
|
||||
traverse(clause, {}, {'column': cols.add})
|
||||
return cols
|
||||
|
||||
|
||||
def _textual_index_column(table, text_):
|
||||
"""a workaround for the Index construct's severe lack of flexibility"""
|
||||
if isinstance(text_, compat.string_types):
|
||||
c = Column(text_, sqltypes.NULLTYPE)
|
||||
table.append_column(c)
|
||||
return c
|
||||
elif isinstance(text_, TextClause):
|
||||
return _textual_index_element(table, text_)
|
||||
else:
|
||||
raise ValueError("String or text() construct expected")
|
||||
|
||||
|
||||
class _textual_index_element(sql.ColumnElement):
|
||||
"""Wrap around a sqlalchemy text() construct in such a way that
|
||||
we appear like a column-oriented SQL expression to an Index
|
||||
construct.
|
||||
|
||||
The issue here is that currently the Postgresql dialect, the biggest
|
||||
recipient of functional indexes, keys all the index expressions to
|
||||
the corresponding column expressions when rendering CREATE INDEX,
|
||||
so the Index we create here needs to have a .columns collection that
|
||||
is the same length as the .expressions collection. Ultimately
|
||||
SQLAlchemy should support text() expressions in indexes.
|
||||
|
||||
See https://bitbucket.org/zzzeek/sqlalchemy/issue/3174/\
|
||||
support-text-sent-to-indexes
|
||||
|
||||
"""
|
||||
__visit_name__ = '_textual_idx_element'
|
||||
|
||||
def __init__(self, table, text):
|
||||
self.table = table
|
||||
self.text = text
|
||||
self.key = text.text
|
||||
self.fake_column = schema.Column(self.text.text, sqltypes.NULLTYPE)
|
||||
table.append_column(self.fake_column)
|
||||
|
||||
def get_children(self):
|
||||
return [self.fake_column]
|
||||
|
||||
|
||||
@compiles(_textual_index_element)
|
||||
def _render_textual_index_column(element, compiler, **kw):
|
||||
return compiler.process(element.text, **kw)
|
||||
|
||||
|
||||
class _literal_bindparam(_BindParamClause):
|
||||
pass
|
||||
|
||||
|
||||
@compiles(_literal_bindparam)
|
||||
def _render_literal_bindparam(element, compiler, **kw):
|
||||
return compiler.render_literal_bindparam(element, **kw)
|
||||
|
||||
|
||||
def _get_index_expressions(idx):
|
||||
if sqla_08:
|
||||
return list(idx.expressions)
|
||||
else:
|
||||
return list(idx.columns)
|
||||
|
||||
|
||||
def _get_index_column_names(idx):
|
||||
return [getattr(exp, "name", None) for exp in _get_index_expressions(idx)]
|
||||
|
||||
|
||||
def _get_index_final_name(dialect, idx):
|
||||
if sqla_08:
|
||||
return dialect.ddl_compiler(dialect, None)._prepared_index_name(idx)
|
||||
else:
|
||||
return idx.name
|
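These version flags are consumed throughout Alembic as plain
conditionals; a representative sketch using the flags re-exported from
``alembic.util``::

    from alembic.util import sqla_08

    def index_expressions(idx):
        # mirrors _get_index_expressions() above: 0.8+ Index objects
        # carry .expressions, earlier ones only .columns
        if sqla_08:
            return list(idx.expressions)
        else:
            return list(idx.columns)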
|
@@ -1,95 +0,0 @@
|
|||
# Makefile for Sphinx documentation
|
||||
#
|
||||
|
||||
# You can set these variables from the command line.
|
||||
SPHINXOPTS =
|
||||
SPHINXBUILD = sphinx-build
|
||||
PAPER =
|
||||
BUILDDIR = output
|
||||
|
||||
# Internal variables.
|
||||
PAPEROPT_a4 = -D latex_paper_size=a4
|
||||
PAPEROPT_letter = -D latex_paper_size=letter
|
||||
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
|
||||
|
||||
.PHONY: help clean html dirhtml pickle json htmlhelp qthelp latex changes linkcheck doctest
|
||||
|
||||
help:
|
||||
@echo "Please use \`make <target>' where <target> is one of"
|
||||
@echo " html to make standalone HTML files"
|
||||
@echo " dist-html same as html, but places files in /doc"
|
||||
@echo " dirhtml to make HTML files named index.html in directories"
|
||||
@echo " pickle to make pickle files"
|
||||
@echo " json to make JSON files"
|
||||
@echo " htmlhelp to make HTML files and a HTML help project"
|
||||
@echo " qthelp to make HTML files and a qthelp project"
|
||||
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
|
||||
@echo " changes to make an overview of all changed/added/deprecated items"
|
||||
@echo " linkcheck to check all external links for integrity"
|
||||
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
|
||||
|
||||
clean:
|
||||
-rm -rf $(BUILDDIR)/*
|
||||
|
||||
html:
|
||||
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
|
||||
|
||||
dist-html:
|
||||
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) ..
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in ../."
|
||||
|
||||
dirhtml:
|
||||
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
|
||||
|
||||
pickle:
|
||||
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
|
||||
@echo
|
||||
@echo "Build finished; now you can process the pickle files."
|
||||
|
||||
json:
|
||||
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
|
||||
@echo
|
||||
@echo "Build finished; now you can process the JSON files."
|
||||
|
||||
htmlhelp:
|
||||
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run HTML Help Workshop with the" \
|
||||
".hhp project file in $(BUILDDIR)/htmlhelp."
|
||||
|
||||
qthelp:
|
||||
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
|
||||
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
|
||||
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Alembic.qhcp"
|
||||
@echo "To view the help file:"
|
||||
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Alembic.qhc"
|
||||
|
||||
latex:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo
|
||||
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
|
||||
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
|
||||
"run these through (pdf)latex."
|
||||
|
||||
changes:
|
||||
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
|
||||
@echo
|
||||
@echo "The overview file is in $(BUILDDIR)/changes."
|
||||
|
||||
linkcheck:
|
||||
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/linkcheck/output.txt."
|
||||
|
||||
doctest:
|
||||
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
|
||||
@echo "Testing of doctests in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/doctest/output.txt."
|
|
@@ -1,16 +0,0 @@
|
|||
@import url("nature.css");
|
||||
@import url("site_custom_css.css");
|
||||
|
||||
|
||||
.versionadded, .versionchanged, .deprecated {
|
||||
background-color: #FFFFCC;
|
||||
border: 1px solid #FFFF66;
|
||||
margin-bottom: 10px;
|
||||
margin-top: 10px;
|
||||
padding: 7px;
|
||||
}
|
||||
|
||||
.versionadded > p > span, .versionchanged > p > span, .deprecated > p > span{
|
||||
font-style: italic;
|
||||
}
|
||||
|
Binary file not shown (removed image, 121 KiB).
|
@@ -1,644 +0,0 @@
|
|||
.. _alembic.autogenerate.toplevel:
|
||||
|
||||
==============
|
||||
Autogeneration
|
||||
==============
|
||||
|
||||
.. note:: this section discusses the **internal API of Alembic**
|
||||
as regards the autogeneration feature of the ``alembic revision``
|
||||
command.
|
||||
This section is only useful for developers who wish to extend the
|
||||
capabilities of Alembic. For general documentation on the autogenerate
|
||||
feature, please see :doc:`/autogenerate`.
|
||||
|
||||
The autogeneration system exposes a broad public API, including
|
||||
the following areas:
|
||||
|
||||
1. The ability to do a "diff" of a :class:`~sqlalchemy.schema.MetaData` object against
|
||||
a database, and receive a data structure back. This structure
|
||||
is available either as a rudimentary list of changes, or as
|
||||
a :class:`.MigrateOperation` structure.
|
||||
|
||||
2. The ability to alter how the ``alembic revision`` command generates
|
||||
revision scripts, including support for multiple revision scripts
|
||||
generated in one pass.
|
||||
|
||||
3. The ability to add new operation directives to autogeneration, including
|
||||
custom schema/model comparison functions and revision script rendering.
|
||||
|
||||
Getting Diffs
|
||||
==============
|
||||
|
||||
The simplest API autogenerate provides is the "schema comparison" API;
|
||||
these are simple functions that will run all registered "comparison" functions
|
||||
between a :class:`~sqlalchemy.schema.MetaData` object and a database
|
||||
backend to produce a structure showing how they differ. The two
|
||||
functions provided are :func:`.compare_metadata`, which is more of the
|
||||
"legacy" function that produces diff tuples, and :func:`.produce_migrations`,
|
||||
which produces a structure consisting of operation directives detailed in
|
||||
:ref:`alembic.operations.toplevel`.
|
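For example, a condensed sketch of running :func:`.compare_metadata`
against a database connection::

    from alembic.migration import MigrationContext
    from alembic.autogenerate import compare_metadata
    from sqlalchemy import create_engine, MetaData

    engine = create_engine("sqlite://")
    metadata = MetaData()  # the model metadata to compare

    mc = MigrationContext.configure(engine.connect())
    diff = compare_metadata(mc, metadata)  # list of diff tuples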
||||
|
||||
|
||||
.. autofunction:: alembic.autogenerate.compare_metadata
|
||||
|
||||
.. autofunction:: alembic.autogenerate.produce_migrations
|
||||
|
||||
.. _customizing_revision:
|
||||
|
||||
Customizing Revision Generation
|
||||
==========================================
|
||||
|
||||
.. versionadded:: 0.8.0 - the ``alembic revision`` system is now customizable.
|
||||
|
||||
The ``alembic revision`` command, also available programmatically
|
||||
via :func:`.command.revision`, essentially produces a single migration
|
||||
script after being run. Whether or not the ``--autogenerate`` option
|
||||
was specified basically determines if this script is a blank revision
|
||||
script with empty ``upgrade()`` and ``downgrade()`` functions, or was
|
||||
produced with alembic operation directives as the result of autogenerate.
|
||||
|
||||
In either case, the system creates a full plan of what is to be done
|
||||
in the form of a :class:`.MigrateOperation` structure, which is then
|
||||
used to produce the script.
|
||||
|
||||
For example, suppose we ran ``alembic revision --autogenerate``, and the
|
||||
end result was that it produced a new revision ``'eced083f5df'``
|
||||
with the following contents::
|
||||
|
||||
"""create the organization table."""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = 'eced083f5df'
|
||||
down_revision = 'beafc7d709f'
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
|
||||
|
||||
def upgrade():
|
||||
op.create_table(
|
||||
'organization',
|
||||
sa.Column('id', sa.Integer(), primary_key=True),
|
||||
sa.Column('name', sa.String(50), nullable=False)
|
||||
)
|
||||
op.add_column(
|
||||
'user',
|
||||
sa.Column('organization_id', sa.Integer())
|
||||
)
|
||||
op.create_foreign_key(
|
||||
'org_fk', 'user', 'organization', ['organization_id'], ['id']
|
||||
)
|
||||
|
||||
def downgrade():
|
||||
op.drop_constraint('org_fk', 'user')
|
||||
op.drop_column('user', 'organization_id')
|
||||
op.drop_table('organization')
|
||||
|
||||
The above script is generated by a :class:`.MigrateOperation` structure
|
||||
that looks like this::
|
||||
|
||||
from alembic.operations import ops
|
||||
import sqlalchemy as sa
|
||||
|
||||
migration_script = ops.MigrationScript(
|
||||
'eced083f5df',
|
||||
ops.UpgradeOps(
|
||||
ops=[
|
||||
ops.CreateTableOp(
|
||||
'organization',
|
||||
[
|
||||
sa.Column('id', sa.Integer(), primary_key=True),
|
||||
sa.Column('name', sa.String(50), nullable=False)
|
||||
]
|
||||
),
|
||||
ops.ModifyTableOps(
|
||||
'user',
|
||||
ops=[
|
||||
ops.AddColumnOp(
|
||||
'user',
|
||||
sa.Column('organization_id', sa.Integer())
|
||||
),
|
||||
ops.CreateForeignKeyOp(
|
||||
'org_fk', 'user', 'organization',
|
||||
['organization_id'], ['id']
|
||||
)
|
||||
]
|
||||
)
|
||||
]
|
||||
),
|
||||
ops.DowngradeOps(
|
||||
ops=[
|
||||
ops.ModifyTableOps(
|
||||
'user',
|
||||
ops=[
|
||||
ops.DropConstraintOp('org_fk', 'user'),
|
||||
ops.DropColumnOp('user', 'organization_id')
|
||||
]
|
||||
),
|
||||
ops.DropTableOp('organization')
|
||||
]
|
||||
),
|
||||
message='create the organization table.'
|
||||
)
|
||||
|
||||
When we deal with a :class:`.MigrationScript` structure, we can render
|
||||
the upgrade/downgrade sections into strings for debugging purposes
|
||||
using the :func:`.render_python_code` helper function::
|
||||
|
||||
from alembic.autogenerate import render_python_code
|
||||
print(render_python_code(migration_script.upgrade_ops))
|
||||
|
||||
Renders::
|
||||
|
||||
### commands auto generated by Alembic - please adjust! ###
|
||||
op.create_table('organization',
|
||||
sa.Column('id', sa.Integer(), nullable=False),
|
||||
sa.Column('name', sa.String(length=50), nullable=False),
|
||||
sa.PrimaryKeyConstraint('id')
|
||||
)
|
||||
op.add_column('user', sa.Column('organization_id', sa.Integer(), nullable=True))
|
||||
op.create_foreign_key('org_fk', 'user', 'organization', ['organization_id'], ['id'])
|
||||
### end Alembic commands ###
|
||||
|
||||
Given that structures like the above are used to generate new revision
|
||||
files, and that we'd like to be able to alter these as they are created,
|
||||
we then need a system to access this structure when the
|
||||
:func:`.command.revision` command is used. The
|
||||
:paramref:`.EnvironmentContext.configure.process_revision_directives`
|
||||
parameter gives us a way to alter this. This is a function that
|
||||
is passed the above structure as generated by Alembic, giving us a chance
|
||||
to alter it.
|
||||
For example, if we wanted to put all the "upgrade" operations into
|
||||
a certain branch, and we wanted our script to not have any "downgrade"
|
||||
operations at all, we could build an extension as follows, illustrated
|
||||
within an ``env.py`` script::
|
||||
|
||||
def process_revision_directives(context, revision, directives):
|
||||
script = directives[0]
|
||||
|
||||
# set specific branch
|
||||
script.head = "mybranch@head"
|
||||
|
||||
# erase downgrade operations
|
||||
script.downgrade_ops.ops[:] = []
|
||||
|
||||
# ...
|
||||
|
||||
def run_migrations_online():
|
||||
|
||||
# ...
|
||||
with engine.connect() as connection:
|
||||
|
||||
context.configure(
|
||||
connection=connection,
|
||||
target_metadata=target_metadata,
|
||||
process_revision_directives=process_revision_directives)
|
||||
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
Above, the ``directives`` argument is a Python list. We may alter the
|
||||
given structure within this list in-place, or replace it with a new
|
||||
structure consisting of zero or more :class:`.MigrationScript` directives.
|
||||
The :func:`.command.revision` command will then produce scripts corresponding
|
||||
to whatever is in this list.
|
||||
|
||||
.. autofunction:: alembic.autogenerate.render_python_code
|
||||
|
||||
.. _autogen_rewriter:
|
||||
|
||||
Fine-Grained Autogenerate Generation with Rewriters
|
||||
---------------------------------------------------
|
||||
|
||||
The preceding example illustrated how we can make a simple change to the
|
||||
structure of the operation directives to produce new autogenerate output.
|
||||
For the case where we want to affect very specific parts of the autogenerate
|
||||
stream, we can make a function for
|
||||
:paramref:`.EnvironmentContext.configure.process_revision_directives`
|
||||
which traverses through the whole :class:`.MigrationScript` structure, locates
|
||||
the elements we care about and modifies them in-place as needed. However,
|
||||
to reduce the boilerplate associated with this task, we can use the
|
||||
:class:`.Rewriter` object to make this easier. :class:`.Rewriter` gives
|
||||
us an object that we can pass directly to
|
||||
:paramref:`.EnvironmentContext.configure.process_revision_directives` which
|
||||
we can also attach handler functions onto, keyed to specific types of
|
||||
constructs.
|
||||
|
||||
Below is an example where we rewrite :class:`.ops.AddColumnOp` directives;
|
||||
based on whether or not the new column is "nullable", we either return
|
||||
the existing directive, or we return the existing directive with
|
||||
the nullable flag changed, inside of a list with a second directive
|
||||
to alter the nullable flag in a second step::
|
||||
|
||||
# ... fragmented env.py script ....
|
||||
|
||||
from alembic.autogenerate import rewriter
|
||||
from alembic.operations import ops
|
||||
|
||||
writer = rewriter.Rewriter()
|
||||
|
||||
@writer.rewrites(ops.AddColumnOp)
|
||||
def add_column(context, revision, op):
|
||||
if op.column.nullable:
|
||||
return op
|
||||
else:
|
||||
op.column.nullable = True
|
||||
return [
|
||||
op,
|
||||
ops.AlterColumnOp(
|
||||
op.table_name,
|
||||
op.column.name,
|
||||
modify_nullable=False,
|
||||
existing_type=op.column.type,
|
||||
)
|
||||
]
|
||||
|
||||
# ... later ...
|
||||
|
||||
def run_migrations_online():
|
||||
# ...
|
||||
|
||||
with connectable.connect() as connection:
|
||||
context.configure(
|
||||
connection=connection,
|
||||
target_metadata=target_metadata,
|
||||
process_revision_directives=writer
|
||||
)
|
||||
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
Above, in a full :class:`.ops.MigrationScript` structure, the
|
||||
:class:`.AddColumn` directives would be present within
|
||||
the paths ``MigrationScript->UpgradeOps->ModifyTableOps``
|
||||
and ``MigrationScript->DowngradeOps->ModifyTableOps``. The
|
||||
:class:`.Rewriter` handles traversing into these structures as well
|
||||
as rewriting them as needed so that we only need to code for the specific
|
||||
object we care about.
|
||||
|
||||
|
||||
.. autoclass:: alembic.autogenerate.rewriter.Rewriter
|
||||
:members:
|
||||
|
||||
.. _autogen_customizing_multiengine_revision:
|
||||
|
||||
Revision Generation with Multiple Engines / ``run_migrations()`` calls
|
||||
----------------------------------------------------------------------
|
||||
|
||||
A lesser-used technique which allows autogenerated migrations to run
|
||||
against multiple database backends at once, generating changes into
|
||||
a single migration script, is illustrated in the
|
||||
provided ``multidb`` template. This template features a special ``env.py``
|
||||
which iterates through multiple :class:`~sqlalchemy.engine.Engine` instances
|
||||
and calls upon :meth:`.MigrationContext.run_migrations` for each::
|
||||
|
||||
for name, rec in engines.items():
|
||||
logger.info("Migrating database %s" % name)
|
||||
context.configure(
|
||||
connection=rec['connection'],
|
||||
upgrade_token="%s_upgrades" % name,
|
||||
downgrade_token="%s_downgrades" % name,
|
||||
target_metadata=target_metadata.get(name)
|
||||
)
|
||||
context.run_migrations(engine_name=name)
|
||||
|
||||
Above, :meth:`.MigrationContext.run_migrations` is run multiple times,
|
||||
once for each engine. Within the context of autogeneration, each time
|
||||
the method is called the :paramref:`~.EnvironmentContext.configure.upgrade_token`
|
||||
and :paramref:`~.EnvironmentContext.configure.downgrade_token` parameters
|
||||
are changed, so that the collection of template variables gains distinct
|
||||
entries for each engine, which are then referred to explicitly
|
||||
within ``script.py.mako``.
|
||||
|
||||
In terms of the
|
||||
:paramref:`.EnvironmentContext.configure.process_revision_directives` hook,
|
||||
the behavior here is that the ``process_revision_directives`` hook
|
||||
is invoked **multiple times, once for each call to
|
||||
context.run_migrations()**. This means that if
|
||||
a multi-``run_migrations()`` approach is to be combined with the
|
||||
``process_revision_directives`` hook, care must be taken to use the
|
||||
hook appropriately.
|
||||
|
||||
The first point to note is that when a **second** call to
|
||||
``run_migrations()`` occurs, the ``.upgrade_ops`` and ``.downgrade_ops``
|
||||
attributes are **converted into Python lists**, and new
|
||||
:class:`.UpgradeOps` and :class:`.DowngradeOps` objects are appended
|
||||
to these lists. Each :class:`.UpgradeOps` and :class:`.DowngradeOps`
|
||||
object maintains an ``.upgrade_token`` and a ``.downgrade_token`` attribute
|
||||
respectively, which serves to render their contents into the appropriate
|
||||
template token.
|
||||
|
||||
For example, a multi-engine run that has the engine names ``engine1``
|
||||
and ``engine2`` will generate tokens of ``engine1_upgrades``,
|
||||
``engine1_downgrades``, ``engine2_upgrades`` and ``engine2_downgrades`` as
|
||||
it runs. The resulting migration structure would look like this::
|
||||
|
||||
from alembic.operations import ops
|
||||
import sqlalchemy as sa
|
||||
|
||||
migration_script = ops.MigrationScript(
|
||||
'eced083f5df',
|
||||
[
|
||||
ops.UpgradeOps(
|
||||
ops=[
|
||||
# upgrade operations for "engine1"
|
||||
],
|
||||
upgrade_token="engine1_upgrades"
|
||||
),
|
||||
ops.UpgradeOps(
|
||||
ops=[
|
||||
# upgrade operations for "engine2"
|
||||
],
|
||||
upgrade_token="engine2_upgrades"
|
||||
),
|
||||
],
|
||||
[
|
||||
ops.DowngradeOps(
|
||||
ops=[
|
||||
# downgrade operations for "engine1"
|
||||
],
|
||||
downgrade_token="engine1_downgrades"
|
||||
),
|
||||
ops.DowngradeOps(
|
||||
ops=[
|
||||
# downgrade operations for "engine2"
|
||||
],
|
||||
downgrade_token="engine2_downgrades"
|
||||
)
|
||||
],
|
||||
message='migration message'
|
||||
)
|
||||
|
||||
|
||||
Given the above, the following guidelines should be considered when
|
||||
the ``env.py`` script calls upon :meth:`.MigrationContext.run_migrations`
|
||||
multiple times when running autogenerate:
|
||||
|
||||
* If the ``process_revision_directives`` hook aims to **add elements
|
||||
based on inspection of the current database /
|
||||
connection**, it should do its operation **on each iteration**. This is
|
||||
so that each time the hook runs, the database is available.
|
||||
|
||||
* Alternatively, if the ``process_revision_directives`` hook aims to
|
||||
**modify the list of migration directives in place**, this should
|
||||
be called **only on the last iteration**. This is so that the hook
|
||||
isn't being given an ever-growing structure each time which it has already
|
||||
modified previously.
|
||||
|
||||
* The :class:`.Rewriter` object, if used, should be called **only on the
|
||||
last iteration**, because it will always deliver all directives every time,
|
||||
so again to avoid double/triple/etc. processing of directives it should
|
||||
be called only when the structure is complete.
|
||||
|
||||
* The :attr:`.MigrationScript.upgrade_ops_list` and
|
||||
:attr:`.MigrationScript.downgrade_ops_list` attributes should be consulted
|
||||
when referring to the collection of :class:`.UpgradeOps` and
|
||||
:class:`.DowngradeOps` objects.
|
||||
|
||||
.. versionchanged:: 0.8.1 - multiple calls to
|
||||
:meth:`.MigrationContext.run_migrations` within an autogenerate operation,
|
||||
such as that proposed within the ``multidb`` script template,
|
||||
are now accommodated by the new extensible migration system
|
||||
introduced in 0.8.0.
|
||||
|
||||
|
||||
.. _autogen_custom_ops:
|
||||
|
||||
Autogenerating Custom Operation Directives
|
||||
==========================================
|
||||
|
||||
In the section :ref:`operation_plugins`, we talked about adding new
|
||||
subclasses of :class:`.MigrateOperation` in order to add new ``op.``
|
||||
directives. In the preceding section :ref:`customizing_revision`, we
|
||||
also learned that these same :class:`.MigrateOperation` structures are at
|
||||
the base of how the autogenerate system knows what Python code to render.
|
||||
Using this knowledge, we can create additional functions that plug into
|
||||
the autogenerate system so that our new operations can be generated
|
||||
into migration scripts when ``alembic revision --autogenerate`` is run.
|
||||
|
||||
The following sections will detail an example of this using
|
||||
the ``CreateSequenceOp`` and ``DropSequenceOp`` directives
|
||||
we created in :ref:`operation_plugins`, which correspond to the
|
||||
SQLAlchemy :class:`~sqlalchemy.schema.Sequence` construct.
|
||||
|
||||
.. versionadded:: 0.8.0 - custom operations can be added to the
|
||||
autogenerate system to support new kinds of database objects.
|
||||
|
||||
Tracking our Object with the Model
|
||||
----------------------------------
|
||||
|
||||
The basic job of an autogenerate comparison function is to inspect
|
||||
a series of objects in the database and compare them against a series
|
||||
of objects defined in our model. By "in our model", we mean anything
|
||||
defined in Python code that we want to track, however most commonly
|
||||
we're talking about a series of :class:`~sqlalchemy.schema.Table`
|
||||
objects present in a :class:`~sqlalchemy.schema.MetaData` collection.
|
||||
|
||||
Let's propose a simple way of seeing what :class:`~sqlalchemy.schema.Sequence`
|
||||
objects we want to ensure exist in the database when autogenerate
|
||||
runs. While these objects do have some integrations with
|
||||
:class:`~sqlalchemy.schema.Table` and :class:`~sqlalchemy.schema.MetaData`
|
||||
already, let's assume they don't, as the example here intends to illustrate
|
||||
how we would do this for most any kind of custom construct. We
|
||||
associate the object with the :attr:`~sqlalchemy.schema.MetaData.info`
|
||||
collection of :class:`~sqlalchemy.schema.MetaData`, which is a dictionary
|
||||
we can use for anything and which we know will be passed to the autogenerate
|
||||
process::
|
||||
|
||||
from sqlalchemy.schema import Sequence
|
||||
|
||||
def add_sequence_to_model(sequence, metadata):
|
||||
metadata.info.setdefault("sequences", set()).add(
|
||||
(sequence.schema, sequence.name)
|
||||
)
|
||||
|
||||
my_seq = Sequence("my_sequence")
|
||||
add_sequence_to_model(my_seq, model_metadata)
|
||||
|
||||
The :attr:`~sqlalchemy.schema.MetaData.info`
|
||||
dictionary is a good place to put things that we want our autogeneration
|
||||
routines to be able to locate, which can include any object such as
|
||||
custom DDL objects representing views, triggers, special constraints,
|
||||
or anything else we want to support.
|
||||
|
||||
|
||||
Registering a Comparison Function
|
||||
---------------------------------
|
||||
|
||||
We now need to register a comparison hook, which will be used
|
||||
to compare the database to our model and produce ``CreateSequenceOp``
|
||||
and ``DropSequenceOp`` directives to be included in our migration
|
||||
script. Note that we are assuming a
|
||||
Postgresql backend::
|
||||
|
||||
from alembic.autogenerate import comparators
|
||||
|
||||
@comparators.dispatch_for("schema")
|
||||
def compare_sequences(autogen_context, upgrade_ops, schemas):
|
||||
all_conn_sequences = set()
|
||||
|
||||
for sch in schemas:
|
||||
|
||||
all_conn_sequences.update([
|
||||
(sch, row[0]) for row in
|
||||
autogen_context.connection.execute(
|
||||
"SELECT relname FROM pg_class c join "
|
||||
"pg_namespace n on n.oid=c.relnamespace where "
|
||||
"relkind='S' and n.nspname=%(nspname)s",
|
||||
|
||||
# note that we consider a schema of 'None' in our
|
||||
# model to be the "default" name in the PG database;
|
||||
# this usually is the name 'public'
|
||||
nspname=autogen_context.dialect.default_schema_name
|
||||
if sch is None else sch
|
||||
)
|
||||
])
|
||||
|
||||
# get the collection of Sequence objects we're storing with
|
||||
# our MetaData
|
||||
metadata_sequences = autogen_context.metadata.info.setdefault(
|
||||
"sequences", set())
|
||||
|
||||
# for new names, produce CreateSequenceOp directives
|
||||
for sch, name in metadata_sequences.difference(all_conn_sequences):
|
||||
upgrade_ops.ops.append(
|
||||
CreateSequenceOp(name, schema=sch)
|
||||
)
|
||||
|
||||
# for names that are going away, produce DropSequenceOp
|
||||
# directives
|
||||
for sch, name in all_conn_sequences.difference(metadata_sequences):
|
||||
upgrade_ops.ops.append(
|
||||
DropSequenceOp(name, schema=sch)
|
||||
)
|
||||
|
||||
Above, we've built a new function ``compare_sequences()`` and registered
|
||||
it as a "schema" level comparison function with autogenerate. The
|
||||
job that it performs is that it compares the list of sequence names
|
||||
present in each database schema with that of a list of sequence names
|
||||
that we are maintaining in our :class:`~sqlalchemy.schema.MetaData` object.
|
||||
|
||||
When autogenerate completes, it will have a series of
|
||||
``CreateSequenceOp`` and ``DropSequenceOp`` directives in the list of
|
||||
"upgrade" operations; the list of "downgrade" operations is generated
|
||||
directly from these using the
|
||||
``CreateSequenceOp.reverse()`` and ``DropSequenceOp.reverse()`` methods
|
||||
that we've implemented on these objects.
|
||||
|
||||
The registration of our function at the scope of "schema" means our
|
||||
autogenerate comparison function is called outside of the context
|
||||
of any specific table or column. The three available scopes
|
||||
are "schema", "table", and "column", summarized as follows:
|
||||
|
||||
* **Schema level** - these hooks are passed a :class:`.AutogenContext`,
|
||||
an :class:`.UpgradeOps` collection, and a collection of string schema
|
||||
names to be operated upon. If the
|
||||
:class:`.UpgradeOps` collection contains changes after all
|
||||
hooks are run, it is included in the migration script:
|
||||
|
||||
::
|
||||
|
||||
@comparators.dispatch_for("schema")
|
||||
def compare_schema_level(autogen_context, upgrade_ops, schemas):
|
||||
pass
|
||||
|
||||
* **Table level** - these hooks are passed a :class:`.AutogenContext`,
|
||||
a :class:`.ModifyTableOps` collection, a schema name, table name,
|
||||
a :class:`~sqlalchemy.schema.Table` reflected from the database if any
|
||||
or ``None``, and a :class:`~sqlalchemy.schema.Table` present in the
|
||||
local :class:`~sqlalchemy.schema.MetaData`. If the
|
||||
:class:`.ModifyTableOps` collection contains changes after all
|
||||
hooks are run, it is included in the migration script:
|
||||
|
||||
::
|
||||
|
||||
@comparators.dispatch_for("table")
|
||||
def compare_table_level(autogen_context, modify_ops,
|
||||
schemaname, tablename, conn_table, metadata_table):
|
||||
pass
|
||||
|
||||
* **Column level** - these hooks are passed a :class:`.AutogenContext`,
|
||||
an :class:`.AlterColumnOp` object, a schema name, table name,
|
||||
column name, a :class:`~sqlalchemy.schema.Column` reflected from the
|
||||
database and a :class:`~sqlalchemy.schema.Column` present in the
|
||||
local table. If the :class:`.AlterColumnOp` contains changes after
|
||||
all hooks are run, it is included in the migration script;
|
||||
a "change" is considered to be present if any of the ``modify_`` attributes
|
||||
are set to a non-default value, or there are any keys
|
||||
in the ``.kw`` collection with the prefix ``"modify_"``:
|
||||
|
||||
::
|
||||
|
||||
@comparators.dispatch_for("column")
|
||||
def compare_column_level(autogen_context, alter_column_op,
|
||||
schemaname, tname, cname, conn_col, metadata_col):
|
||||
pass
|
||||
|
||||
The :class:`.AutogenContext` passed to these hooks is documented below.
|
||||
|
||||
.. autoclass:: alembic.autogenerate.api.AutogenContext
|
||||
:members:
|
||||
|

Creating a Render Function
--------------------------

The second autogenerate integration hook is to provide a "render" function;
since the autogenerate
system renders Python code, we need to build a function that renders
the correct "op" instructions for our directive::

    from alembic.autogenerate import renderers

    @renderers.dispatch_for(CreateSequenceOp)
    def render_create_sequence(autogen_context, op):
        return "op.create_sequence(%r, **%r)" % (
            op.sequence_name,
            {"schema": op.schema}
        )


    @renderers.dispatch_for(DropSequenceOp)
    def render_drop_sequence(autogen_context, op):
        return "op.drop_sequence(%r, **%r)" % (
            op.sequence_name,
            {"schema": op.schema}
        )

The above functions will render Python code corresponding to the
presence of ``CreateSequenceOp`` and ``DropSequenceOp`` instructions
in the list that our comparison function generates.

Running It
----------

All the above code can be organized however the developer sees fit;
the only thing needed to make it work is that when the
Alembic environment ``env.py`` is invoked, it either imports modules
which contain all the above routines, or they are locally present,
or some combination thereof.

If we then have code in our model (which of course also needs to be invoked
when ``env.py`` runs!) like this::

    from sqlalchemy.schema import Sequence

    my_seq_1 = Sequence("my_sequence_1")
    add_sequence_to_model(my_seq_1, target_metadata)

when we first run ``alembic revision --autogenerate``, we'll see this
in our migration file::

    def upgrade():
        ### commands auto generated by Alembic - please adjust! ###
        op.create_sequence('my_sequence_1', **{'schema': None})
        ### end Alembic commands ###


    def downgrade():
        ### commands auto generated by Alembic - please adjust! ###
        op.drop_sequence('my_sequence_1', **{'schema': None})
        ### end Alembic commands ###

These are our custom directives that will be invoked when ``alembic upgrade``
or ``alembic downgrade`` is run.

@ -1,44 +0,0 @@

.. _alembic.command.toplevel:

=========
Commands
=========

.. note:: this section discusses the **internal API of Alembic**
   as regards its command invocation system.
   This section is only useful for developers who wish to extend the
   capabilities of Alembic.  For documentation on using Alembic commands,
   please see :doc:`/tutorial`.

Alembic commands are all represented by functions in the :ref:`alembic.command.toplevel`
package.  They all accept the same style of usage, being sent
the :class:`.Config` object as the first argument.

Commands can be run programmatically, by first constructing a :class:`.Config`
object, as in::

    from alembic.config import Config
    from alembic import command
    alembic_cfg = Config("/path/to/yourapp/alembic.ini")
    command.upgrade(alembic_cfg, "head")

In many cases, and perhaps more often than not, an application will wish
to call upon a series of Alembic commands and/or other features.  It is
usually a good idea to link multiple commands along a single connection
and transaction, if feasible.  This can be achieved using the
:attr:`.Config.attributes` dictionary in order to share a connection::

    with engine.begin() as connection:
        alembic_cfg.attributes['connection'] = connection
        command.upgrade(alembic_cfg, "head")

This recipe requires that ``env.py`` consumes this connection argument;
see the example in :ref:`connection_sharing` for details.
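
A hedged sketch of the ``env.py`` side of this recipe (the exact
structure of your ``env.py`` may differ) is to fall back to creating an
engine only when no connection was supplied::

    from sqlalchemy import engine_from_config

    connectable = config.attributes.get("connection", None)
    if connectable is None:
        # no connection was shared by the calling application;
        # create an engine from the .ini configuration as usual
        connectable = engine_from_config(
            config.get_section(config.config_ini_section),
            prefix="sqlalchemy.")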

To write small API functions that make direct use of database and script directory
information, rather than just running one of the built-in commands,
use the :class:`.ScriptDirectory` and :class:`.MigrationContext`
classes directly.
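
For example, the following is a minimal sketch (the database URL and
``.ini`` path are assumptions) that compares the database's current
revision against the newest revision in the script directory::

    from sqlalchemy import create_engine

    from alembic.config import Config
    from alembic.script import ScriptDirectory
    from alembic.runtime.migration import MigrationContext

    config = Config("/path/to/yourapp/alembic.ini")
    script = ScriptDirectory.from_config(config)

    engine = create_engine("sqlite:///app.db")
    with engine.connect() as connection:
        context = MigrationContext.configure(connection)
        current_rev = context.get_current_revision()

    print("database at %s, scripts at %s" % (
        current_rev, script.get_current_head()))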

.. automodule:: alembic.command
   :members:

@ -1,32 +0,0 @@

.. _alembic.config.toplevel:

==============
Configuration
==============

.. note:: this section discusses the **internal API of Alembic** as
   regards internal configuration constructs.
   This section is only useful for developers who wish to extend the
   capabilities of Alembic.  For documentation on configuration of
   an Alembic environment, please see :doc:`/tutorial`.

The :class:`.Config` object represents the configuration
passed to the Alembic environment.  From an API usage perspective,
it is needed for the following use cases:

* to create a :class:`.ScriptDirectory`, which allows you to work
  with the actual script files in a migration environment
* to create an :class:`.EnvironmentContext`, which allows you to
  actually run the ``env.py`` module within the migration environment
* to programmatically run any of the commands in the :ref:`alembic.command.toplevel`
  module.

The :class:`.Config` is *not* needed for these cases:

* to instantiate a :class:`.MigrationContext` directly - this object
  only needs a SQLAlchemy connection or dialect name.
* to instantiate a :class:`.Operations` object - this object only
  needs a :class:`.MigrationContext`.
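
As a hedged sketch of those last two points (the connection URL and
table details are assumptions), a migration operation can be run with
no :class:`.Config` at all::

    from sqlalchemy import create_engine

    from alembic.migration import MigrationContext
    from alembic.operations import Operations

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    with engine.begin() as connection:
        # MigrationContext needs only a connection, not a Config
        context = MigrationContext.configure(connection)
        # Operations needs only a MigrationContext
        op = Operations(context)
        op.alter_column("some_table", "x", new_column_name="y")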

.. automodule:: alembic.config
   :members:

@ -1,56 +0,0 @@

.. _alembic.ddl.toplevel:

=============
DDL Internals
=============

These are some of the constructs used to generate migration
instructions.  The APIs here build off of the :class:`sqlalchemy.schema.DDLElement`
and :ref:`sqlalchemy.ext.compiler_toplevel` systems.

For programmatic usage of Alembic's migration directives, the easiest
route is to use the higher level functions given by :ref:`alembic.operations.toplevel`.

.. automodule:: alembic.ddl
   :members:
   :undoc-members:

.. automodule:: alembic.ddl.base
   :members:
   :undoc-members:

.. automodule:: alembic.ddl.impl
   :members:
   :undoc-members:

MySQL
=============

.. automodule:: alembic.ddl.mysql
   :members:
   :undoc-members:
   :show-inheritance:

MS-SQL
=============

.. automodule:: alembic.ddl.mssql
   :members:
   :undoc-members:
   :show-inheritance:

Postgresql
=============

.. automodule:: alembic.ddl.postgresql
   :members:
   :undoc-members:
   :show-inheritance:

SQLite
=============

.. automodule:: alembic.ddl.sqlite
   :members:
   :undoc-members:
   :show-inheritance:

@ -1,32 +0,0 @@

.. _api:

===========
API Details
===========

Alembic's internal API has many public integration points that can be used
to extend Alembic's functionality as well as to re-use its functionality
in new ways.  As the project has grown, more APIs have been created and exposed
for this purpose.

Direct use of the vast majority of API details discussed here is not needed
for rudimentary use of Alembic; the only APIs normally used by end users are
the methods provided by the :class:`.Operations` class, which is discussed
outside of this subsection, and the parameters that can be passed to
the :meth:`.EnvironmentContext.configure` method, used when configuring
one's ``env.py`` environment.  However, real-world applications will
usually end up using more of the internal API, in particular being able
to run commands programmatically, as discussed in the section :doc:`/api/commands`.

.. toctree::
   :maxdepth: 2

   overview
   runtime
   config
   commands
   operations
   autogenerate
   script
   ddl

@ -1,174 +0,0 @@

.. _alembic.operations.toplevel:

=====================
Operation Directives
=====================

.. note:: this section discusses the **internal API of Alembic** as regards
   the internal system of defining migration operation directives.
   This section is only useful for developers who wish to extend the
   capabilities of Alembic.  For end-user guidance on Alembic migration
   operations, please see :ref:`ops`.

Within migration scripts, actual database migration operations are handled
via an instance of :class:`.Operations`.  The :class:`.Operations` class
lists out available migration operations that are linked to a
:class:`.MigrationContext`, which communicates instructions originated
by the :class:`.Operations` object into SQL that is sent to a database or SQL
output stream.

Most methods on the :class:`.Operations` class are generated dynamically
using a "plugin" system, described in the next section
:ref:`operation_plugins`.  Additionally, when Alembic migration scripts
actually run, the methods on the current :class:`.Operations` object are
proxied out to the ``alembic.op`` module, so that they are available
using module-style access.

For an overview of how to use an :class:`.Operations` object directly
in programs, as well as for reference to the standard operation methods
as well as "batch" methods, see :ref:`ops`.

.. _operation_plugins:

Operation Plugins
=====================

The Operations object is extensible using a plugin system.  This system
allows one to add new ``op.<some_operation>`` methods at runtime.  The
steps to use this system are to first create a subclass of
:class:`.MigrateOperation`, register it using the :meth:`.Operations.register_operation`
class decorator, then build a default "implementation" function which is
established using the :meth:`.Operations.implementation_for` decorator.

.. versionadded:: 0.8.0 - the :class:`.Operations` class is now an
   open namespace that is extensible via the creation of new
   :class:`.MigrateOperation` subclasses.

Below we illustrate a very simple operation ``CreateSequenceOp`` which
will implement a new method ``op.create_sequence()`` for use in
migration scripts::

    from alembic.operations import Operations, MigrateOperation

    @Operations.register_operation("create_sequence")
    class CreateSequenceOp(MigrateOperation):
        """Create a SEQUENCE."""

        def __init__(self, sequence_name, schema=None):
            self.sequence_name = sequence_name
            self.schema = schema

        @classmethod
        def create_sequence(cls, operations, sequence_name, **kw):
            """Issue a "CREATE SEQUENCE" instruction."""

            op = CreateSequenceOp(sequence_name, **kw)
            return operations.invoke(op)

        def reverse(self):
            # only needed to support autogenerate
            return DropSequenceOp(self.sequence_name, schema=self.schema)

    @Operations.register_operation("drop_sequence")
    class DropSequenceOp(MigrateOperation):
        """Drop a SEQUENCE."""

        def __init__(self, sequence_name, schema=None):
            self.sequence_name = sequence_name
            self.schema = schema

        @classmethod
        def drop_sequence(cls, operations, sequence_name, **kw):
            """Issue a "DROP SEQUENCE" instruction."""

            op = DropSequenceOp(sequence_name, **kw)
            return operations.invoke(op)

        def reverse(self):
            # only needed to support autogenerate
            return CreateSequenceOp(self.sequence_name, schema=self.schema)

Above, the ``CreateSequenceOp`` and ``DropSequenceOp`` classes represent
new operations that will
be available as ``op.create_sequence()`` and ``op.drop_sequence()``.
The reason the operations
are represented as stateful classes is so that an operation and a specific
set of arguments can be represented generically; the state can then correspond
to different kinds of operations, such as invoking the instruction against
a database, or autogenerating Python code for the operation into a
script.

In order to establish the migrate-script behavior of the new operations,
we use the :meth:`.Operations.implementation_for` decorator::

    @Operations.implementation_for(CreateSequenceOp)
    def create_sequence(operations, operation):
        if operation.schema is not None:
            name = "%s.%s" % (operation.schema, operation.sequence_name)
        else:
            name = operation.sequence_name
        operations.execute("CREATE SEQUENCE %s" % name)


    @Operations.implementation_for(DropSequenceOp)
    def drop_sequence(operations, operation):
        if operation.schema is not None:
            name = "%s.%s" % (operation.schema, operation.sequence_name)
        else:
            name = operation.sequence_name
        operations.execute("DROP SEQUENCE %s" % name)

Above, we use the simplest possible technique of invoking our DDL, which
is just to call :meth:`.Operations.execute` with literal SQL.  If this is
all a custom operation needs, then this is fine.  However, options for
more comprehensive support include building out a custom SQL construct,
as documented at :ref:`sqlalchemy.ext.compiler_toplevel`.
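
As a hedged sketch of that more comprehensive approach (the
``CreateSequenceElement`` construct below is illustrative and not part
of Alembic), the DDL can be produced via a compiled construct::

    from sqlalchemy.ext.compiler import compiles
    from sqlalchemy.schema import DDLElement

    class CreateSequenceElement(DDLElement):
        """Illustrative DDL construct for CREATE SEQUENCE."""

        def __init__(self, name, schema=None):
            self.name = name
            self.schema = schema

    @compiles(CreateSequenceElement)
    def _compile_create_sequence(element, compiler, **kw):
        # a real implementation would quote these identifiers via
        # compiler.preparer; kept simple here
        if element.schema is not None:
            return "CREATE SEQUENCE %s.%s" % (element.schema, element.name)
        return "CREATE SEQUENCE %s" % element.name

The implementation function can then run
``operations.execute(CreateSequenceElement(operation.sequence_name, operation.schema))``
rather than assembling a SQL string itself.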

With the above two steps, a migration script can now use new methods
``op.create_sequence()`` and ``op.drop_sequence()`` that will proxy to
our object as a classmethod::

    def upgrade():
        op.create_sequence("my_sequence")

    def downgrade():
        op.drop_sequence("my_sequence")

The registration of new operations only needs to occur in time for the
``env.py`` script to invoke :meth:`.MigrationContext.run_migrations`;
within the module level of the ``env.py`` script is sufficient.

.. seealso::

    :ref:`autogen_custom_ops` - how to add autogenerate support to
    custom operations.

.. versionadded:: 0.8 - the migration operations available via the
   :class:`.Operations` class as well as the ``alembic.op`` namespace
   are now extensible using a plugin system.


.. _operation_objects:
.. _alembic.operations.ops.toplevel:

Built-in Operation Objects
==============================

The migration operations present on :class:`.Operations` are themselves
delivered via operation objects that represent an operation and its
arguments.  All operations descend from the :class:`.MigrateOperation`
class, and are registered with the :class:`.Operations` class using
the :meth:`.Operations.register_operation` class decorator.  The
:class:`.MigrateOperation` objects also serve as the basis for how the
autogenerate system renders new migration scripts.

.. seealso::

    :ref:`operation_plugins`

    :ref:`customizing_revision`

The built-in operation objects are listed below.

.. automodule:: alembic.operations.ops
   :members:

@ -1,62 +0,0 @@

========
Overview
========

.. note:: this section is a technical overview of the
   **internal API of Alembic**.
   This section is only useful for developers who wish to extend the
   capabilities of Alembic; for regular users, reading this section
   is **not necessary**.

A visualization of the primary features of Alembic's internals is presented
in the following figure.  The module and class boxes do not list out
all the operations provided by each unit; they include only a small set of
representative elements intended to convey the primary purpose of each system.

.. image:: api_overview.png

The script runner for Alembic is present in the :ref:`alembic.config.toplevel` module.
This module produces a :class:`.Config` object and passes it to the
appropriate function in :ref:`alembic.command.toplevel`.  Functions within
:ref:`alembic.command.toplevel` will typically instantiate a
:class:`.ScriptDirectory` instance, which represents the collection of
version files, and an :class:`.EnvironmentContext`, which is a configurational
facade passed to the environment's ``env.py`` script.

The :class:`.EnvironmentContext` object is the primary object used within
the ``env.py`` script, whose main purpose is that of a facade for creating and using
a :class:`.MigrationContext` object, which is the actual migration engine
that refers to a database implementation.  The primary method called
on this object within an ``env.py`` script is the
:meth:`.EnvironmentContext.configure` method, which sets up the
:class:`.MigrationContext` with database connectivity and behavioral
configuration.  It also supplies methods for transaction demarcation and
migration running, but these methods ultimately call upon the
:class:`.MigrationContext` that's been configured.
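
Condensed into ``env.py`` terms, the flow just described looks roughly
like the following sketch (the database URL is an assumption)::

    from sqlalchemy import create_engine

    from alembic import context

    engine = create_engine("postgresql://scott:tiger@localhost/test")
    with engine.connect() as connection:
        # configure() sets up the MigrationContext behind the facade
        context.configure(connection=connection)
        with context.begin_transaction():
            # ultimately invokes MigrationContext.run_migrations()
            context.run_migrations()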

:class:`.MigrationContext` is the gateway to the database
for other parts of the application, and produces a :class:`.DefaultImpl`
object which does the actual database communication, and knows how to
create the specific SQL text of the various DDL directives such as
ALTER TABLE; :class:`.DefaultImpl` has subclasses that are per-database-backend.
In "offline" mode (e.g. ``--sql``), the :class:`.MigrationContext` will
produce SQL to a file output stream instead of a database.

During an upgrade or downgrade operation, a specific series of migration
scripts is invoked starting with the :class:`.MigrationContext` in conjunction
with the :class:`.ScriptDirectory`; the actual scripts themselves make use
of the :class:`.Operations` object, which provides the end-user interface to
specific database operations.  The :class:`.Operations` object is generated
based on a series of "operation directive" objects that are user-extensible,
and start out in the :ref:`alembic.operations.ops.toplevel` module.

Another prominent feature of Alembic is the "autogenerate" feature, which
produces new migration scripts that contain Python code.  The autogenerate
feature starts in :ref:`alembic.autogenerate.toplevel`, and is used exclusively
by the :func:`.alembic.command.revision` command when the ``--autogenerate``
flag is passed.  Autogenerate refers to the :class:`.MigrationContext`
and :class:`.DefaultImpl` in order to access database connectivity and
access per-backend rules for autogenerate comparisons.  It also makes use
of :ref:`alembic.operations.ops.toplevel` in order to represent the operations that
it will render into scripts.

@ -1,40 +0,0 @@

.. _alembic.runtime.environment.toplevel:

=======================
Runtime Objects
=======================

The "runtime" of Alembic involves the :class:`.EnvironmentContext`
and :class:`.MigrationContext` objects.  These are the objects that are
in play once the ``env.py`` script is loaded up by a command and
a migration operation proceeds.

The Environment Context
=======================

The :class:`.EnvironmentContext` class provides most of the
API used within an ``env.py`` script.  Within ``env.py``,
the instantiated :class:`.EnvironmentContext` is made available
via a special *proxy module* called ``alembic.context``.  That is,
you can import ``alembic.context`` like a regular Python module,
and each name you call upon it is ultimately routed towards the
current :class:`.EnvironmentContext` in use.

In particular, the key method used within ``env.py`` is :meth:`.EnvironmentContext.configure`,
which establishes all the details about how the database will be accessed.

.. automodule:: alembic.runtime.environment
   :members: EnvironmentContext

.. _alembic.runtime.migration.toplevel:

The Migration Context
=====================

The :class:`.MigrationContext` handles the actual work to be performed
against a database backend as migration operations proceed.  It is generally
not exposed to the end-user, except when the
:paramref:`~.EnvironmentContext.configure.on_version_apply` callback hook is used.
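
A hedged sketch of that hook, to be placed in ``env.py`` (the logging
behavior is purely illustrative, and the exact keyword arguments are an
assumption based on the hook's documentation)::

    def on_version_apply(ctx, step, heads, run_args, **kw):
        # ctx is the MigrationContext; step describes the revision
        # step that was just applied
        print("applied migration step: %s" % (step, ))

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        on_version_apply=on_version_apply
    )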

.. automodule:: alembic.runtime.migration
   :members: MigrationContext

@ -1,20 +0,0 @@

.. _alembic.script.toplevel:

================
Script Directory
================

The :class:`.ScriptDirectory` object provides programmatic access
to the Alembic version files present in the filesystem.
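
For example, a short sketch (assuming an ``alembic.ini`` in the current
directory) that lists every revision known to the script directory,
newest first::

    from alembic.config import Config
    from alembic.script import ScriptDirectory

    config = Config("alembic.ini")
    script = ScriptDirectory.from_config(config)

    # walk_revisions() yields Script objects from head down to base
    for revision in script.walk_revisions():
        print(revision.revision, revision.doc)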

.. automodule:: alembic.script
   :members:

Revision
========

The :class:`.RevisionMap` object serves as the basis for revision
management, used exclusively by :class:`.ScriptDirectory`.

.. automodule:: alembic.script.revision
   :members:

File diff suppressed because it is too large
@ -1,462 +0,0 @@

Auto Generating Migrations
===========================

Alembic can view the status of the database and compare against the table metadata
in the application, generating the "obvious" migrations based on a comparison.  This
is achieved using the ``--autogenerate`` option to the ``alembic revision`` command,
which places so-called *candidate* migrations into our new migrations file.  We
review and modify these by hand as needed, then proceed normally.

To use autogenerate, we first need to modify our ``env.py`` so that it gets access
to a table metadata object that contains the target.  Suppose our application
has a `declarative base <http://www.sqlalchemy.org/docs/orm/extensions/declarative.html#synopsis>`_
in ``myapp.mymodel``.  This base contains a :class:`~sqlalchemy.schema.MetaData` object which
contains :class:`~sqlalchemy.schema.Table` objects defining our database.  We make sure this
is loaded in ``env.py`` and then passed to :meth:`.EnvironmentContext.configure` via the
``target_metadata`` argument.  The ``env.py`` sample script used in the
generic template already has a
variable declaration near the top for our convenience, where we replace ``None``
with our :class:`~sqlalchemy.schema.MetaData`.  Starting with::

    # add your model's MetaData object here
    # for 'autogenerate' support
    # from myapp import mymodel
    # target_metadata = mymodel.Base.metadata
    target_metadata = None

we change to::

    from myapp.mymodel import Base
    target_metadata = Base.metadata

.. note::

   The above example refers to the **generic alembic env.py template**, e.g.
   the one created by default when calling upon ``alembic init``, and not
   the special-use templates such as ``multidb``.  Please consult the source
   code and comments within the ``env.py`` script directly for specific
   guidance on where and how the autogenerate metadata is established.

If we look later in the script, down in ``run_migrations_online()``,
we can see the directive passed to :meth:`.EnvironmentContext.configure`::

    def run_migrations_online():
        engine = engine_from_config(
            config.get_section(config.config_ini_section), prefix='sqlalchemy.')

        with engine.connect() as connection:
            context.configure(
                connection=connection,
                target_metadata=target_metadata
            )

            with context.begin_transaction():
                context.run_migrations()

We can then use the ``alembic revision`` command in conjunction with the
``--autogenerate`` option.  Suppose
our :class:`~sqlalchemy.schema.MetaData` contained a definition for the ``account`` table,
and the database did not.  We'd get output like::

    $ alembic revision --autogenerate -m "Added account table"
    INFO [alembic.context] Detected added table 'account'
    Generating /path/to/foo/alembic/versions/27c6a30d7c24.py...done

We can then view our file ``27c6a30d7c24.py`` and see that a rudimentary migration
is already present::

    """empty message

    Revision ID: 27c6a30d7c24
    Revises: None
    Create Date: 2011-11-08 11:40:27.089406

    """

    # revision identifiers, used by Alembic.
    revision = '27c6a30d7c24'
    down_revision = None

    from alembic import op
    import sqlalchemy as sa

    def upgrade():
        ### commands auto generated by Alembic - please adjust! ###
        op.create_table(
            'account',
            sa.Column('id', sa.Integer()),
            sa.Column('name', sa.String(length=50), nullable=False),
            sa.Column('description', sa.VARCHAR(200)),
            sa.Column('last_transaction_date', sa.DateTime()),
            sa.PrimaryKeyConstraint('id')
        )
        ### end Alembic commands ###

    def downgrade():
        ### commands auto generated by Alembic - please adjust! ###
        op.drop_table("account")
        ### end Alembic commands ###

The migration hasn't actually run yet, of course.  We do that via the usual ``upgrade``
command.  We should also go into our migration file and alter it as needed, including
adjustments to the directives as well as the addition of other directives which these may
be dependent on - specifically data changes in between creates/alters/drops.

What does Autogenerate Detect (and what does it *not* detect?)
--------------------------------------------------------------

The vast majority of user issues with Alembic center on the topic of what
kinds of changes autogenerate can and cannot detect reliably, as well as
how it renders Python code for what it does detect.  It is critical to
note that **autogenerate is not intended to be perfect**.  It is *always*
necessary to manually review and correct the **candidate migrations**
that autogenerate produces.  The feature is getting more and more
comprehensive and error-free as releases continue, but one should take
note of the current limitations.

Autogenerate **will detect**:

* Table additions, removals.
* Column additions, removals.
* Change of nullable status on columns.
* Basic changes in indexes and explicitly-named unique constraints.

  .. versionadded:: 0.6.1 Support for autogenerate of indexes and unique constraints.

* Basic changes in foreign key constraints.

  .. versionadded:: 0.7.1 Support for autogenerate of foreign key constraints.

Autogenerate can **optionally detect**:

* Change of column type.  This will occur if you set
  the :paramref:`.EnvironmentContext.configure.compare_type` parameter
  to ``True``, or to a custom callable function.
  The feature works well in most cases,
  but is off by default so that it can be tested on the target schema
  first.  It can also be customized by passing a callable here; see the
  section :ref:`compare_types` for details.
* Change of server default.  This will occur if you set
  the :paramref:`.EnvironmentContext.configure.compare_server_default`
  parameter to ``True``, or to a custom callable function.
  This feature works well for simple cases but cannot always produce
  accurate results.  The Postgresql backend will actually invoke
  the "detected" and "metadata" values against the database to
  determine equivalence.  The feature is off by default so that
  it can be tested on the target schema first.  Like type comparison,
  it can also be customized by passing a callable; see the
  function's documentation for details.

Autogenerate **can not detect**:

* Changes of table name.  These will come out as an add/drop of two different
  tables, and should be hand-edited into a name change instead.
* Changes of column name.  Like table name changes, these are detected as
  a column add/drop pair, which is not at all the same as a name change.
* Anonymously named constraints.  Give your constraints a name,
  e.g. ``UniqueConstraint('col1', 'col2', name="my_name")``.  See the section
  :doc:`naming` for background on how to configure automatic naming schemes
  for constraints.
* Special SQLAlchemy types such as :class:`~sqlalchemy.types.Enum` when generated
  on a backend which doesn't support ENUM directly - this is because the
  representation of such a type
  in the non-supporting database, i.e. a CHAR + CHECK constraint, could be
  any kind of CHAR + CHECK.  For SQLAlchemy to determine that this is actually
  an ENUM would only be a guess, something that's generally a bad idea.
  To implement your own "guessing" function here, use the
  :meth:`sqlalchemy.events.DDLEvents.column_reflect` event
  to detect when a CHAR (or whatever the target type is) is reflected,
  and change it to an ENUM (or whatever type is desired) if it is known that
  that's the intent of the type; a sketch of this approach follows the
  lists below.  The
  :meth:`sqlalchemy.events.DDLEvents.after_parent_attach`
  event can be used within the autogenerate process to intercept and un-attach
  unwanted CHECK constraints.

Autogenerate can't currently, but **will eventually detect**:

* Some free-standing constraint additions and removals may not be supported,
  including PRIMARY KEY, EXCLUDE, CHECK; these are not necessarily implemented
  within the autogenerate detection system and also may not be supported by
  the underlying SQLAlchemy dialect.
* Sequence additions, removals - not yet implemented.
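
As a hedged illustration of the :meth:`~sqlalchemy.events.DDLEvents.column_reflect`
approach mentioned above (the table name, column name, and ENUM values
are assumptions), a reflected VARCHAR can be coerced back to the ENUM
it is known to represent::

    import sqlalchemy as sa
    from sqlalchemy import event

    @event.listens_for(sa.Table, "column_reflect")
    def _reflect_known_enums(inspector, table, column_info):
        # swap the generic reflected type for the richer one we know
        # this column actually represents
        if table.name == "account" and column_info["name"] == "status":
            column_info["type"] = sa.Enum(
                "active", "closed", name="account_status")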

Autogenerating Multiple MetaData collections
--------------------------------------------

The ``target_metadata`` collection may also be defined as a sequence
if an application has multiple :class:`~sqlalchemy.schema.MetaData`
collections involved::

    from myapp.mymodel1 import Model1Base
    from myapp.mymodel2 import Model2Base
    target_metadata = [Model1Base.metadata, Model2Base.metadata]

The sequence of :class:`~sqlalchemy.schema.MetaData` collections will be
consulted in order during the autogenerate process.  Note that each
:class:`~sqlalchemy.schema.MetaData` must contain **unique** table keys
(e.g. the "key" is the combination of the table's name and schema);
if two :class:`~sqlalchemy.schema.MetaData` objects contain a table
with the same schema/name combination, an error is raised.

.. versionchanged:: 0.9.0 the
   :paramref:`.EnvironmentContext.configure.target_metadata`
   parameter may now be passed a sequence of
   :class:`~sqlalchemy.schema.MetaData` objects to support
   autogeneration of multiple :class:`~sqlalchemy.schema.MetaData`
   collections.

Comparing and Rendering Types
------------------------------

The area of autogenerate's behavior of comparing and rendering Python-based type objects
in migration scripts presents a challenge, in that there's
a very wide variety of types to be rendered in scripts, including those
part of SQLAlchemy as well as user-defined types.  A few options
are given to help out with this task.

.. _autogen_module_prefix:

Controlling the Module Prefix
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When types are rendered, they are generated with a **module prefix**, so
that they are available based on a relatively small number of imports.
The rule for which prefix is used is based on the kind of datatype as well
as configurational settings.  For example, when Alembic renders SQLAlchemy
types, it will by default prefix the type name with the prefix ``sa.``::

    Column("my_column", sa.Integer())

The use of the ``sa.`` prefix is controllable by altering the value
of :paramref:`.EnvironmentContext.configure.sqlalchemy_module_prefix`::

    def run_migrations_online():
        # ...

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            sqlalchemy_module_prefix="sqla.",
            # ...
        )

        # ...

In either case, the ``sa.`` prefix, or whatever prefix is desired, should
also be included in the imports section of ``script.py.mako``; it also
defaults to ``import sqlalchemy as sa``.

For user-defined types, that is, any custom type that
is not within the ``sqlalchemy.`` module namespace, by default Alembic will
use the **value of __module__ for the custom type**::

    Column("my_column", myapp.models.utils.types.MyCustomType())

The imports for the above type again must be made present within the migration,
either manually, or by adding it to ``script.py.mako``.

.. versionchanged:: 0.7.0
   The default module prefix rendering for a user-defined type now makes use
   of the type's ``__module__`` attribute to retrieve the prefix, rather than
   using the value of
   :paramref:`~.EnvironmentContext.configure.sqlalchemy_module_prefix`.

The above custom type has a long and cumbersome name based on the use
of ``__module__`` directly, which also implies that lots of imports would
be needed in order to accommodate lots of types.  For this reason, it is
recommended that user-defined types used in migration scripts be made
available from a single module.  Suppose we call it ``myapp.migration_types``::

    # myapp/migration_types.py

    from myapp.models.utils.types import MyCustomType

We can first add an import for ``migration_types`` to our ``script.py.mako``::

    from alembic import op
    import sqlalchemy as sa
    import myapp.migration_types
    ${imports if imports else ""}

We then override Alembic's use of ``__module__`` by providing a fixed
prefix, using the :paramref:`.EnvironmentContext.configure.user_module_prefix`
option::

    def run_migrations_online():
        # ...

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            user_module_prefix="myapp.migration_types.",
            # ...
        )

        # ...

Above, we now would get a migration like::

    Column("my_column", myapp.migration_types.MyCustomType())

Now, when we inevitably refactor our application to move ``MyCustomType``
somewhere else, we need only modify the ``myapp.migration_types`` module,
instead of searching and replacing all instances within our migration scripts.

.. versionadded:: 0.6.3 Added :paramref:`.EnvironmentContext.configure.user_module_prefix`.

.. _autogen_render_types:

Affecting the Rendering of Types Themselves
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The methodology Alembic uses to generate SQLAlchemy and user-defined type constructs
as Python code is plain old ``__repr__()``.  SQLAlchemy's built-in types
for the most part have a ``__repr__()`` that faithfully renders a
Python-compatible constructor call, but there are some exceptions, particularly
in those cases when a constructor accepts arguments that aren't compatible
with ``__repr__()``, such as a pickling function.

When building a custom type that will be rendered into a migration script,
it is often necessary to explicitly give the type a ``__repr__()`` that will
faithfully reproduce the constructor for that type.  This, in combination
with :paramref:`.EnvironmentContext.configure.user_module_prefix`, is usually
enough.  However, if additional behaviors are needed, a more comprehensive
hook is the :paramref:`.EnvironmentContext.configure.render_item` option.
This hook allows one to provide a callable function within ``env.py`` that will fully take
over how a type is rendered, including its module prefix::

    def render_item(type_, obj, autogen_context):
        """Apply custom rendering for selected items."""

        if type_ == 'type' and isinstance(obj, MySpecialType):
            return "mypackage.%r" % obj

        # default rendering for other objects
        return False

    def run_migrations_online():
        # ...

        context.configure(
            connection=connection,
            target_metadata=target_metadata,
            render_item=render_item,
            # ...
        )

        # ...

In the above example, we'd ensure our ``MySpecialType`` includes an appropriate
``__repr__()`` method, which is invoked when we render it against ``"%r"``.

The callable we use for :paramref:`.EnvironmentContext.configure.render_item`
can also add imports to our migration script.  The :class:`.AutogenContext` passed in
contains a data member called :attr:`.AutogenContext.imports`, which is a Python
``set()`` to which we can add new imports.  For example, if ``MySpecialType``
were in a module called ``mymodel.types``, we can add the import for it
as we encounter the type::

    def render_item(type_, obj, autogen_context):
        """Apply custom rendering for selected items."""

        if type_ == 'type' and isinstance(obj, MySpecialType):
            # add import for this type
            autogen_context.imports.add("from mymodel import types")
            return "types.%r" % obj

        # default rendering for other objects
        return False

.. versionchanged:: 0.8 The ``autogen_context`` data member passed to
   the ``render_item`` callable is now an instance of :class:`.AutogenContext`.

.. versionchanged:: 0.8.3 The "imports" data member of the autogen context
   is restored to the new :class:`.AutogenContext` object as
   :attr:`.AutogenContext.imports`.

The finished migration script will include our imports where the
``${imports}`` expression is used, producing output such as::

    from alembic import op
    import sqlalchemy as sa
    from mymodel import types

    def upgrade():
        op.add_column('sometable', Column('mycolumn', types.MySpecialType()))

.. _compare_types:

Comparing Types
^^^^^^^^^^^^^^^^

The default type comparison logic will work for SQLAlchemy built-in types as
well as basic user-defined types.  This logic is only enabled if the
:paramref:`.EnvironmentContext.configure.compare_type` parameter
is set to True::

    context.configure(
        # ...
        compare_type=True
    )

Alternatively, the :paramref:`.EnvironmentContext.configure.compare_type`
parameter accepts a callable function which may be used to implement custom type
comparison logic, for cases such as where special user-defined types
are being used::

    def my_compare_type(context, inspected_column,
                metadata_column, inspected_type, metadata_type):
        # return True if the types are different,
        # False if not, or None to allow the default implementation
        # to compare these types
        return None

    context.configure(
        # ...
        compare_type=my_compare_type
    )

Above, ``inspected_column`` is a :class:`sqlalchemy.schema.Column` as
returned by
:meth:`sqlalchemy.engine.reflection.Inspector.reflecttable`, whereas
``metadata_column`` is a :class:`sqlalchemy.schema.Column` from the
local model environment.  A return value of ``None`` indicates that default
type comparison should proceed.

Additionally, custom types that are part of imported or third party
packages which have special behaviors such as per-dialect behavior
should implement a method called ``compare_against_backend()``
on their SQLAlchemy type.  If this method is present, it will be called
where it can also return True or False to specify the types compare as
equivalent or not; if it returns None, default type comparison logic
will proceed::

    class MySpecialType(TypeDecorator):

        # ...

        def compare_against_backend(self, dialect, conn_type):
            # return True if the types are equivalent,
            # False if not, or None to allow the default implementation
            # to compare these types
            if dialect.name == 'postgresql':
                return isinstance(conn_type, postgresql.UUID)
            else:
                return isinstance(conn_type, String)

The order of precedence regarding the
:paramref:`.EnvironmentContext.configure.compare_type` callable vs. the
type itself implementing ``compare_against_backend`` is that the
:paramref:`.EnvironmentContext.configure.compare_type` callable is favored
first; if it returns ``None``, then the ``compare_against_backend`` method
will be used, if present on the metadata type.  If that returns ``None``,
then a basic check for type equivalence is run.

.. versionadded:: 0.7.6 - added support for the ``compare_against_backend()``
   method.

@ -1,375 +0,0 @@

.. _batch_migrations:

Running "Batch" Migrations for SQLite and Other Databases
=========================================================

.. note:: "Batch mode" for SQLite and other databases is a new and intricate
   feature within the 0.7.0 series of Alembic, and should be
   considered as "beta" for the next several releases.

.. versionadded:: 0.7.0

The SQLite database presents a challenge to migration tools
in that it has almost no support for the ALTER statement upon which
relational schema migrations rely.  The rationale for this stems from
philosophical and architectural concerns within SQLite, and it is unlikely
to change.

Migration tools are instead expected to produce copies of SQLite tables that
correspond to the new structure, transfer the data from the existing
table to the new one, then drop the old table.  For our purposes here
we'll call this the **"move and copy"** workflow, and in order to accommodate it
in a way that is reasonably predictable, while also remaining compatible
with other databases, Alembic provides the **batch** operations context.

Within this context, a relational table is named, and then a series of
mutation operations to that table alone are specified within
the block.  When the context is complete, the
"move and copy" procedure takes place: the existing table structure is reflected
from the database, a new version of this table is created with the given
changes, data is copied from the
old table to the new table using "INSERT from SELECT", and finally the old
table is dropped and the new one renamed to the original name.

The :meth:`.Operations.batch_alter_table` method provides the gateway to this
process::

    with op.batch_alter_table("some_table") as batch_op:
        batch_op.add_column(Column('foo', Integer))
        batch_op.drop_column('bar')

When the above directives are invoked within a migration script, on a
SQLite backend we would see SQL like:

.. sourcecode:: sql

    CREATE TABLE _alembic_batch_temp (
        id INTEGER NOT NULL,
        foo INTEGER,
        PRIMARY KEY (id)
    );
    INSERT INTO _alembic_batch_temp (id) SELECT some_table.id FROM some_table;
    DROP TABLE some_table;
    ALTER TABLE _alembic_batch_temp RENAME TO some_table;

On other backends, we'd see the usual ``ALTER`` statements done as though
there were no batch directive - the batch context by default only does
the "move and copy" process if SQLite is in use, and if there are
migration directives other than :meth:`.Operations.add_column` present,
which is the one kind of column-level ALTER statement that SQLite supports.
:meth:`.Operations.batch_alter_table` can be configured
to run "move and copy" unconditionally in all cases, including on databases
other than SQLite; more on this is below.

.. _batch_controlling_table_reflection:

Controlling Table Reflection
----------------------------

The reflection of the :class:`~sqlalchemy.schema.Table` object that takes place
when "move and copy" proceeds is performed using the standard ``autoload=True``
approach.  This call can be affected using the
:paramref:`~.Operations.batch_alter_table.reflect_args` and
:paramref:`~.Operations.batch_alter_table.reflect_kwargs` arguments.
For example, to override a :class:`~sqlalchemy.schema.Column` within
the reflection process such that a :class:`~sqlalchemy.types.Boolean`
object is reflected with the ``create_constraint`` flag set to ``False``::

    with op.batch_alter_table(
        "bar",
        reflect_args=[Column('flag', Boolean(create_constraint=False))]
    ) as batch_op:
        batch_op.alter_column(
            'flag', new_column_name='bflag', existing_type=Boolean)

Another use case is to add a listener to the :class:`~sqlalchemy.schema.Table`
as it is reflected so that special logic can be applied to columns or
types, using the :meth:`~sqlalchemy.events.DDLEvents.column_reflect` event::

    def listen_for_reflect(inspector, table, column_info):
        "correct an ENUM type"
        if column_info['name'] == 'my_enum':
            column_info['type'] = Enum('a', 'b', 'c')

    with op.batch_alter_table(
        "bar",
        reflect_kwargs=dict(
            listeners=[
                ('column_reflect', listen_for_reflect)
            ]
        )
    ) as batch_op:
        batch_op.alter_column(
            'flag', new_column_name='bflag', existing_type=Boolean)

The reflection process may also be bypassed entirely by sending a
pre-fabricated :class:`~sqlalchemy.schema.Table` object; see
:ref:`batch_offline_mode` for an example.

.. versionadded:: 0.7.1
   added :paramref:`.Operations.batch_alter_table.reflect_args`
   and :paramref:`.Operations.batch_alter_table.reflect_kwargs` options.

.. _sqlite_batch_constraints:

Dealing with Constraints
------------------------

There are a variety of issues when using "batch" mode with constraints,
such as FOREIGN KEY, CHECK and UNIQUE constraints.  This section
will attempt to detail many of these scenarios.

.. _dropping_sqlite_foreign_keys:

Dropping Unnamed or Named Foreign Key Constraints
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

SQLite, unlike any other database, allows constraints to exist in the
database that have no identifying name.  On all other backends, the
target database will always generate some kind of name, if one is not
given.

The first challenge this represents is that an unnamed constraint can't
by itself be targeted by the :meth:`.BatchOperations.drop_constraint` method.
An unnamed FOREIGN KEY constraint is implicit whenever the
:class:`~sqlalchemy.schema.ForeignKey`
or :class:`~sqlalchemy.schema.ForeignKeyConstraint` objects are used without
passing them a name.  Only on SQLite will these constraints remain entirely
unnamed when they are created on the target database; an automatically generated
name will be assigned in the case of all other database backends.

A second issue is that SQLAlchemy itself has inconsistent behavior in
dealing with SQLite constraints as far as names.  Prior to version 1.0,
SQLAlchemy omits the name of foreign key constraints when reflecting them
against the SQLite backend.  So even if the target application has gone through
the steps to apply names to the constraints as stored in the database,
they still aren't targetable within the batch reflection process prior
to SQLAlchemy 1.0.

Within the scope of batch mode, this presents the issue that the
:meth:`.BatchOperations.drop_constraint` method requires a constraint name
in order to target the correct constraint.

In order to overcome this, the :meth:`.Operations.batch_alter_table` method supports a
:paramref:`~.Operations.batch_alter_table.naming_convention` argument, so that
all reflected constraints, including foreign keys that are unnamed, or
were named but SQLAlchemy isn't loading this name, may be given a name,
as described in :ref:`autogen_naming_conventions`.  Usage is as follows::

    naming_convention = {
        "fk":
        "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
    }
    with op.batch_alter_table(
            "bar", naming_convention=naming_convention) as batch_op:
        batch_op.drop_constraint(
            "fk_bar_foo_id_foo", type_="foreignkey")

Note that the naming convention feature requires at least
**SQLAlchemy 0.9.4** for support.

.. versionadded:: 0.7.1
   added :paramref:`~.Operations.batch_alter_table.naming_convention` to
   :meth:`.Operations.batch_alter_table`.

Including unnamed UNIQUE constraints
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A similar, but frustratingly slightly different, issue is that in the
case of UNIQUE constraints, we again have the issue that SQLite allows
unnamed UNIQUE constraints to exist on the database, however in this case,
SQLAlchemy prior to version 1.0 doesn't reflect these constraints at all.
It does properly reflect named unique constraints with their names, however.

So in this case, the workaround for foreign key names is still not sufficient
prior to SQLAlchemy 1.0.  If our table includes unnamed unique constraints,
and we'd like them to be re-created along with the table, we need to include
them directly, which can be done via the
:paramref:`~.Operations.batch_alter_table.table_args` argument::

    with op.batch_alter_table(
            "bar", table_args=(UniqueConstraint('username'),)
    ) as batch_op:
        batch_op.add_column(Column('foo', Integer))

Changing the Type of Boolean, Enum and other implicit CHECK datatypes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The SQLAlchemy types :class:`~sqlalchemy.types.Boolean` and
:class:`~sqlalchemy.types.Enum` are part of a category of types known as
"schema" types; this style of type creates other structures along with the
type itself, most commonly (but not always) a CHECK constraint.

Alembic handles dropping and creating the CHECK constraints here automatically,
including in the case of batch mode.  When changing the type of an existing
column, what's necessary is that the existing type be specified fully::

    with op.batch_alter_table("some_table") as batch_op:
        batch_op.alter_column(
            'q', type_=Integer,
            existing_type=Boolean(create_constraint=True, constraint_name="ck1"))

Including CHECK constraints
^^^^^^^^^^^^^^^^^^^^^^^^^^^

SQLAlchemy currently doesn't reflect CHECK constraints on any backend.
So again these must be stated explicitly if they are to be included in the
recreated table::

    with op.batch_alter_table("some_table", table_args=[
            CheckConstraint('x > 5')
    ]) as batch_op:
        batch_op.add_column(Column('foo', Integer))
        batch_op.drop_column('bar')

Note this only includes CHECK constraints that are explicitly stated
as part of the table definition, not the CHECK constraints that are generated
by datatypes such as :class:`~sqlalchemy.types.Boolean` or
:class:`~sqlalchemy.types.Enum`.

Dealing with Referencing Foreign Keys
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is important to note that batch table operations **do not work** with
foreign keys that enforce referential integrity.  This is because the
target table is dropped; if foreign keys refer to it, this will raise
an error.  On SQLite, whether or not foreign keys actually enforce is
controlled by the ``PRAGMA foreign_keys`` pragma; this pragma, if in use,
must be disabled when the workflow mode proceeds.  When the operation is
complete, the batch-migrated table will have the same name
that it started with, so those referring foreign keys will again
refer to this table.

A special case is dealing with self-referring foreign keys.  Here,
Alembic takes a special step of recreating the self-referring foreign key
as referring to the original table name, rather than to the "temp" table,
so that like in the case of other foreign key constraints, when the table
is renamed to its original name, the foreign key
again references the correct table.  This operation only works when
referential integrity is disabled, consistent with the same requirement
for referring foreign keys from other tables.

.. versionchanged:: 0.8.4 Self-referring foreign keys are created with the
   target table name in batch mode, even though this table will temporarily
   not exist when dropped.  This requires that the target database is not
   enforcing referential integrity.

When SQLite's ``PRAGMA foreign_keys`` mode is turned on, it does provide
the service that foreign key constraints, including self-referential ones, will
automatically be modified to point to their table across table renames,
however this mode prevents the target table from being dropped as is required
by a batch migration.  Therefore it may be necessary to manipulate the
``PRAGMA foreign_keys`` setting if a migration seeks to rename a table vs.
batch migrate it.
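
A hedged sketch of manipulating that pragma around a batch operation
follows; note that SQLite silently ignores ``PRAGMA foreign_keys`` while
a transaction is in progress, so whether this takes effect depends on
how the migration's transaction handling is configured, and the table
and column names here are assumptions::

    # assumes referential integrity enforcement is currently enabled
    op.execute("PRAGMA foreign_keys=OFF")

    with op.batch_alter_table("child_table") as batch_op:
        batch_op.drop_column("some_column")

    op.execute("PRAGMA foreign_keys=ON")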
|
||||
|
||||
.. _batch_offline_mode:
|
||||
|
||||
Working in Offline Mode
|
||||
-----------------------
|
||||
|
||||
In the preceding sections, we've seen how much of an emphasis the
|
||||
"move and copy" process has on using reflection in order to know the
|
||||
structure of the table that is to be copied. This means that in the typical
|
||||
case, "online" mode, where a live database connection is present so that
|
||||
:meth:`.Operations.batch_alter_table` can reflect the table from the
|
||||
database, is required; the ``--sql`` flag **cannot** be used without extra
|
||||
steps.
|
||||
|
||||
To support offline mode, the system must work without table reflection
|
||||
present, which means the full table as it intends to be created must be
|
||||
passed to :meth:`.Operations.batch_alter_table` using
|
||||
:paramref:`~.Operations.batch_alter_table.copy_from`::
|
||||
|
||||
meta = MetaData()
|
||||
some_table = Table(
|
||||
'some_table', meta,
|
||||
Column('id', Integer, primary_key=True),
|
||||
Column('bar', String(50))
|
||||
)
|
||||
|
||||
with op.batch_alter_table("some_table", copy_from=some_table) as batch_op:
|
||||
batch_op.add_column(Column('foo', Integer))
|
||||
batch_op.drop_column('bar')
|
||||
|
||||
The above use pattern is pretty tedious and quite far off from Alembic's
|
||||
preferred style of working; however, if one needs to do SQLite-compatible
|
||||
"move and copy" migrations and need them to generate flat SQL files in
|
||||
"offline" mode, there's not much alternative.
|
||||
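
Once ``copy_from`` is set up as above, the migration can be rendered as SQL
in the usual offline way; redirecting the output to a file as shown here is
just one possible invocation::

    $ alembic upgrade head --sql > migration.sql
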
|
||||
.. versionadded:: 0.7.6 Fully implemented the
|
||||
:paramref:`~.Operations.batch_alter_table.copy_from`
|
||||
parameter.
|
||||
|
||||
|
||||
Batch mode with Autogenerate
|
||||
----------------------------
|
||||
|
||||
The syntax of batch mode is essentially that :meth:`.Operations.batch_alter_table`
|
||||
is used to enter a batch block, and the returned :class:`.BatchOperations` context
|
||||
works just like the regular :class:`.Operations` context, except that
|
||||
the "table name" and "schema name" arguments are omitted.
|
||||
|
||||
To support rendering of migration commands in batch mode for autogenerate,
|
||||
configure the :paramref:`.EnvironmentContext.configure.render_as_batch`
|
||||
flag in ``env.py``::
|
||||
|
||||
context.configure(
|
||||
connection=connection,
|
||||
target_metadata=target_metadata,
|
||||
render_as_batch=True
|
||||
)
|
||||
|
||||
Autogenerate will now generate along the lines of::
|
||||
|
||||
def upgrade():
|
||||
### commands auto generated by Alembic - please adjust! ###
|
||||
with op.batch_alter_table('address', schema=None) as batch_op:
|
||||
batch_op.add_column(sa.Column('street', sa.String(length=50), nullable=True))
|
||||
|
||||
This mode is safe to use in all cases, as the :meth:`.Operations.batch_alter_table`
|
||||
directive by default only takes place for SQLite; other backends will
|
||||
behave just as they normally do in the absence of the batch directives.
|
||||
|
||||
Note that autogenerate support does not include "offline" mode, where
|
||||
the :paramref:`.Operations.batch_alter_table.copy_from` parameter is used.
|
||||
The table definition here would need to be entered into migration files
|
||||
manually if this is needed.
|
||||
|
||||
Batch mode with databases other than SQLite
|
||||
--------------------------------------------
|
||||
|
||||
There's an odd use case some shops have, where the "move and copy" style
|
||||
of migration is useful in some cases for databases that do already support
|
||||
ALTER. There are cases where an ALTER operation may block access to the
|
||||
table for a long time, which might not be acceptable. "move and copy" can
|
||||
be made to work on other backends, though with a few extra caveats.
|
||||
|
||||
The batch mode directive will run the "recreate" system regardless of
|
||||
backend if the flag ``recreate='always'`` is passed::
|
||||
|
||||
with op.batch_alter_table("some_table", recreate='always') as batch_op:
|
||||
batch_op.add_column(Column('foo', Integer))
|
||||
|
||||
The issues that arise in this mode mostly have to do with constraints.
|
||||
Databases such as Postgresql and MySQL with InnoDB will enforce referential
|
||||
integrity (e.g. via foreign keys) in all cases. Unlike SQLite, it's not
|
||||
as simple to turn off referential integrity across the board (nor would it
|
||||
be desirable). Since a new table is replacing the old one, existing
|
||||
foreign key constraints which refer to the target table will need to be
|
||||
unconditionally dropped before the batch operation, and re-created to refer
|
||||
to the new table afterwards. Batch mode currently does not provide any
|
||||
automation for this.
|
||||
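
A sketch of what that manual step can look like, using hypothetical
constraint, table and column names, where an ``orders`` table refers to
``some_table``::

    # drop the referring constraint ahead of the batch operation
    op.drop_constraint(
        "fk_orders_some_table_id", "orders", type_="foreignkey")

    with op.batch_alter_table("some_table", recreate='always') as batch_op:
        batch_op.add_column(Column('foo', Integer))

    # re-create it against the newly built table
    op.create_foreign_key(
        "fk_orders_some_table_id", "orders", "some_table",
        ["some_table_id"], ["id"])
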
|
||||
The Postgresql database, and possibly others, also has the behavior such
|
||||
that when the new table is created, a naming conflict occurs with the
|
||||
named constraints of the new table, in that they match those of the old
|
||||
table, and on Postgresql, these names need to be unique across all tables.
|
||||
The Postgresql dialect will therefore emit a "DROP CONSTRAINT" directive
|
||||
for all constraints on the old table before the new one is created; this is
|
||||
"safe" in case of a failed operation because Postgresql also supports
|
||||
transactional DDL.
|
||||
|
||||
Note that, as is also the case with SQLite, CHECK constraints need to be
|
||||
moved over between old and new table manually using the
|
||||
:paramref:`.Operations.batch_alter_table.table_args` parameter.
|
||||
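
So a non-SQLite "move and copy" that must preserve a CHECK constraint might
look like this sketch, restating the constraint from the earlier example::

    with op.batch_alter_table(
            "some_table", recreate='always',
            table_args=[CheckConstraint('x > 5')]) as batch_op:
        batch_op.drop_column('bar')
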
|
|
@@ -1,819 +0,0 @@
|
|||
.. _branches:
|
||||
|
||||
Working with Branches
|
||||
=====================
|
||||
|
||||
.. note:: Alembic 0.7.0 features an all-new versioning model that fully
|
||||
supports branch points, merge points, and long-lived, labeled branches,
|
||||
including independent branches originating from multiple bases.
|
||||
A great emphasis has been placed on there being almost no impact on the
|
||||
existing Alembic workflow, including that all commands work pretty much
|
||||
the same as they did before, the format of migration files doesn't require
|
||||
any change (though there are some changes that are recommended),
|
||||
and even the structure of the ``alembic_version``
|
||||
table does not change at all. However, most alembic commands now offer
|
||||
new features which will break out an Alembic environment into
|
||||
"branch mode", where things become a lot more intricate. Working in
|
||||
"branch mode" should be considered as a "beta" feature, with many new
|
||||
paradigms and use cases still to be stress tested in the wild.
|
||||
Please tread lightly!
|
||||
|
||||
.. versionadded:: 0.7.0
|
||||
|
||||
A **branch** describes a point in a migration stream when two or more
|
||||
versions refer to the same parent migration as their ancestor. Branches
|
||||
occur naturally when two divergent source trees, both containing Alembic
|
||||
revision files created independently within those source trees, are merged
|
||||
together into one. When this occurs, the challenge of a branch is to **merge** the
|
||||
branches into a single series of changes, so that databases established
|
||||
from either source tree individually can be upgraded to reference the merged
|
||||
result equally. Another scenario where branches are present is when we create them
|
||||
directly; either at some point in the migration stream we'd like different
|
||||
series of migrations to be managed independently (e.g. we create a tree),
|
||||
or we'd like separate migration streams for different features starting
|
||||
at the root (e.g. a *forest*). We'll illustrate all of these cases, starting
|
||||
with the most common which is a source-merge-originated branch that we'll
|
||||
merge.
|
||||
|
||||
Starting with the "account table" example we began in :ref:`create_migration`,
|
||||
assume we have our basemost version ``1975ea83b712``, which leads into
|
||||
the second revision ``ae1027a6acf``, and the migration files for these
|
||||
two revisions are checked into our source repository.
|
||||
Consider if we merged into our source repository another code branch which contained
|
||||
a revision for another table called ``shopping_cart``. This revision was made
|
||||
against our first Alembic revision, the one that generated ``account``. After
|
||||
loading the second source tree in, a new file
|
||||
``27c6a30d7c24_add_shopping_cart_table.py`` exists within our ``versions`` directory.
|
||||
Both it, as well as ``ae1027a6acf_add_a_column.py``, reference
|
||||
``1975ea83b712_add_account_table.py`` as the "downgrade" revision. To illustrate::
|
||||
|
||||
# main source tree:
|
||||
1975ea83b712 (create account table) -> ae1027a6acf (add a column)
|
||||
|
||||
# branched source tree
|
||||
1975ea83b712 (create account table) -> 27c6a30d7c24 (add shopping cart table)
|
||||
|
||||
Above, we can see ``1975ea83b712`` is our **branch point**; two distinct versions
|
||||
both refer to it as their parent. The Alembic command ``branches`` illustrates
|
||||
this fact::
|
||||
|
||||
$ alembic branches --verbose
|
||||
Rev: 1975ea83b712 (branchpoint)
|
||||
Parent: <base>
|
||||
Branches into: 27c6a30d7c24, ae1027a6acf
|
||||
Path: foo/versions/1975ea83b712_add_account_table.py
|
||||
|
||||
create account table
|
||||
|
||||
Revision ID: 1975ea83b712
|
||||
Revises:
|
||||
Create Date: 2014-11-20 13:02:46.257104
|
||||
|
||||
-> 27c6a30d7c24 (head), add shopping cart table
|
||||
-> ae1027a6acf (head), add a column
|
||||
|
||||
History shows it too, illustrating two ``head`` entries as well
|
||||
as a ``branchpoint``::
|
||||
|
||||
$ alembic history
|
||||
1975ea83b712 -> 27c6a30d7c24 (head), add shopping cart table
|
||||
1975ea83b712 -> ae1027a6acf (head), add a column
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
We can get a view of just the current heads using ``alembic heads``::
|
||||
|
||||
$ alembic heads --verbose
|
||||
Rev: 27c6a30d7c24 (head)
|
||||
Parent: 1975ea83b712
|
||||
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
|
||||
|
||||
add shopping cart table
|
||||
|
||||
Revision ID: 27c6a30d7c24
|
||||
Revises: 1975ea83b712
|
||||
Create Date: 2014-11-20 13:03:11.436407
|
||||
|
||||
Rev: ae1027a6acf (head)
|
||||
Parent: 1975ea83b712
|
||||
Path: foo/versions/ae1027a6acf_add_a_column.py
|
||||
|
||||
add a column
|
||||
|
||||
Revision ID: ae1027a6acf
|
||||
Revises: 1975ea83b712
|
||||
Create Date: 2014-11-20 13:02:54.849677
|
||||
|
||||
If we try to run an ``upgrade`` to the usual end target of ``head``, Alembic no
|
||||
longer considers this to be an unambiguous command. As we have more than
|
||||
one ``head``, the ``upgrade`` command wants us to provide more information::
|
||||
|
||||
$ alembic upgrade head
|
||||
FAILED: Multiple head revisions are present for given argument 'head'; please specify a specific
|
||||
target revision, '<branchname>@head' to narrow to a specific head, or 'heads' for all heads
|
||||
|
||||
The ``upgrade`` command gives us quite a few options in which we can proceed
|
||||
with our upgrade, either giving it information on *which* head we'd like to upgrade
|
||||
towards, or alternatively stating that we'd like *all* heads to be upgraded
|
||||
towards at once. However, in the typical case of two source trees being
|
||||
merged, we will want to pursue a third option, which is that we can **merge** these
|
||||
branches.
|
||||
|
||||
Merging Branches
|
||||
----------------
|
||||
|
||||
An Alembic merge is a migration file that joins two or
|
||||
more "head" files together. If the two branches we have right now can
|
||||
be said to be a "tree" structure, introducing this merge file will
|
||||
turn it into a "diamond" structure::
|
||||
|
||||
                                 -- ae1027a6acf -->
                                /                   \
    <base> --> 1975ea83b712 -->                      --> mergepoint
                                \                   /
                                 -- 27c6a30d7c24 -->
|
||||
|
||||
We create the merge file using ``alembic merge``; with this command, we can
|
||||
pass to it an argument such as ``heads``, meaning we'd like to merge all
|
||||
heads. Or, we can pass it individual revision numbers sequentially::
|
||||
|
||||
$ alembic merge -m "merge ae1 and 27c" ae1027 27c6a
|
||||
Generating /path/to/foo/versions/53fffde5ad5_merge_ae1_and_27c.py ... done
|
||||
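
Equivalently, the ``heads`` symbol merges all current heads without listing
them (a sketch; the generated filename will differ from run to run)::

    $ alembic merge -m "merge ae1 and 27c" heads
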
|
||||
Looking inside the new file, we see it as a regular migration file, with
|
||||
the only new twist being that ``down_revision`` points to both revisions::
|
||||
|
||||
"""merge ae1 and 27c
|
||||
|
||||
Revision ID: 53fffde5ad5
|
||||
Revises: ae1027a6acf, 27c6a30d7c24
|
||||
Create Date: 2014-11-20 13:31:50.811663
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '53fffde5ad5'
|
||||
down_revision = ('ae1027a6acf', '27c6a30d7c24')
|
||||
branch_labels = None
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
|
||||
|
||||
def upgrade():
|
||||
pass
|
||||
|
||||
|
||||
def downgrade():
|
||||
pass
|
||||
|
||||
This file is a regular migration file, and if we wish to, we may place
|
||||
:class:`.Operations` directives into the ``upgrade()`` and ``downgrade()``
|
||||
functions like any other migration file, though it is probably best to limit
|
||||
the instructions placed here only to those that deal with any kind of
|
||||
reconciliation that is needed between the two merged branches, if any.
|
||||
|
||||
The ``heads`` command now illustrates that the multiple heads in our
|
||||
``versions/`` directory have been resolved into our new head::
|
||||
|
||||
$ alembic heads --verbose
|
||||
Rev: 53fffde5ad5 (head) (mergepoint)
|
||||
Merges: ae1027a6acf, 27c6a30d7c24
|
||||
Path: foo/versions/53fffde5ad5_merge_ae1_and_27c.py
|
||||
|
||||
merge ae1 and 27c
|
||||
|
||||
Revision ID: 53fffde5ad5
|
||||
Revises: ae1027a6acf, 27c6a30d7c24
|
||||
Create Date: 2014-11-20 13:31:50.811663
|
||||
|
||||
History shows a similar result, as the mergepoint becomes our head::
|
||||
|
||||
$ alembic history
|
||||
ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5 (head) (mergepoint), merge ae1 and 27c
|
||||
1975ea83b712 -> ae1027a6acf, add a column
|
||||
1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
With a single ``head`` target, a generic ``upgrade`` can proceed::
|
||||
|
||||
$ alembic upgrade head
|
||||
INFO [alembic.migration] Context impl PostgresqlImpl.
|
||||
INFO [alembic.migration] Will assume transactional DDL.
|
||||
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
|
||||
INFO [alembic.migration] Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
|
||||
|
||||
|
||||
.. topic:: merge mechanics
|
||||
|
||||
The upgrade process traverses through all of our migration files using
|
||||
a **topological sorting** algorithm, treating the list of migration
|
||||
files not as a linked list, but as a **directed acyclic graph**. The starting
|
||||
points of this traversal are the **current heads** within our database,
|
||||
and the end point is the "head" revision or revisions specified.
|
||||
|
||||
When a migration proceeds across a point at which there are multiple heads,
|
||||
the ``alembic_version`` table will at that point store *multiple* rows,
|
||||
one for each head. Our migration process above will emit SQL against
|
||||
``alembic_version`` along these lines:
|
||||
|
||||
.. sourcecode:: sql
|
||||
|
||||
-- Running upgrade -> 1975ea83b712, create account table
|
||||
INSERT INTO alembic_version (version_num) VALUES ('1975ea83b712')
|
||||
|
||||
-- Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
UPDATE alembic_version SET version_num='27c6a30d7c24' WHERE alembic_version.version_num = '1975ea83b712'
|
||||
|
||||
-- Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
|
||||
INSERT INTO alembic_version (version_num) VALUES ('ae1027a6acf')
|
||||
|
||||
-- Running upgrade ae1027a6acf, 27c6a30d7c24 -> 53fffde5ad5, merge ae1 and 27c
|
||||
DELETE FROM alembic_version WHERE alembic_version.version_num = 'ae1027a6acf'
|
||||
UPDATE alembic_version SET version_num='53fffde5ad5' WHERE alembic_version.version_num = '27c6a30d7c24'
|
||||
|
||||
At the point at which both ``27c6a30d7c24`` and ``ae1027a6acf`` exist within our
|
||||
database, both values are present in ``alembic_version``, which now has
|
||||
two rows. If we upgrade to these two versions alone, then stop and
|
||||
run ``alembic current``, we will see this::
|
||||
|
||||
$ alembic current --verbose
|
||||
Current revision(s) for postgresql://scott:XXXXX@localhost/test:
|
||||
Rev: ae1027a6acf
|
||||
Parent: 1975ea83b712
|
||||
Path: foo/versions/ae1027a6acf_add_a_column.py
|
||||
|
||||
add a column
|
||||
|
||||
Revision ID: ae1027a6acf
|
||||
Revises: 1975ea83b712
|
||||
Create Date: 2014-11-20 13:02:54.849677
|
||||
|
||||
Rev: 27c6a30d7c24
|
||||
Parent: 1975ea83b712
|
||||
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
|
||||
|
||||
add shopping cart table
|
||||
|
||||
Revision ID: 27c6a30d7c24
|
||||
Revises: 1975ea83b712
|
||||
Create Date: 2014-11-20 13:03:11.436407
|
||||
|
||||
A key advantage of the ``merge`` process is that it will
|
||||
run equally well on databases that were present on version ``ae1027a6acf``
|
||||
alone, versus databases that were present on version ``27c6a30d7c24`` alone;
|
||||
whichever version was not yet applied will be applied before the merge point
|
||||
can be crossed. This brings forth a way of thinking about a merge file,
|
||||
as well as about any Alembic revision file. As they are considered to
|
||||
be "nodes" within a set that is subject to topological sorting, each
|
||||
"node" is a point that cannot be crossed until all of its dependencies
|
||||
are satisfied.
|
||||
|
||||
Prior to Alembic's support of merge points, the use case of databases
|
||||
sitting on different heads was basically impossible to reconcile; having
|
||||
to manually splice the head files together invariably meant that one migration
|
||||
would occur before the other, thus being incompatible with databases that
|
||||
were present on the other migration.
|
||||
|
||||
Working with Explicit Branches
|
||||
------------------------------
|
||||
|
||||
The ``alembic upgrade`` command hinted at other options besides merging when
|
||||
dealing with multiple heads. Let's back up and assume we're back where
|
||||
we have as our heads just ``ae1027a6acf`` and ``27c6a30d7c24``::
|
||||
|
||||
$ alembic heads
|
||||
27c6a30d7c24
|
||||
ae1027a6acf
|
||||
|
||||
Earlier, when we did ``alembic upgrade head``, it gave us an error which
|
||||
suggested ``please specify a specific target revision, '<branchname>@head' to
|
||||
narrow to a specific head, or 'heads' for all heads`` in order to proceed
|
||||
without merging. Let's cover those cases.
|
||||
|
||||
Referring to all heads at once
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The ``heads`` identifier is a lot like ``head``, except it explicitly refers
|
||||
to *all* heads at once. That is, it's like telling Alembic to do the operation
|
||||
for both ``ae1027a6acf`` and ``27c6a30d7c24`` simultaneously. If we started
|
||||
from a fresh database and ran ``upgrade heads`` we'd see::
|
||||
|
||||
$ alembic upgrade heads
|
||||
INFO [alembic.migration] Context impl PostgresqlImpl.
|
||||
INFO [alembic.migration] Will assume transactional DDL.
|
||||
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
|
||||
Since we've upgraded to ``heads``, and we do in fact have more than one head,
|
||||
that means these two distinct heads are now in our ``alembic_version`` table.
|
||||
We can see this if we run ``alembic current``::
|
||||
|
||||
$ alembic current
|
||||
ae1027a6acf (head)
|
||||
27c6a30d7c24 (head)
|
||||
|
||||
That means there are two rows in ``alembic_version`` right now. If we downgrade
|
||||
one step at a time, Alembic will **delete** from the ``alembic_version`` table
|
||||
each branch that's closed out, until only one branch remains; then it will
|
||||
continue updating the single value down to the previous versions::
|
||||
|
||||
$ alembic downgrade -1
|
||||
INFO [alembic.migration] Running downgrade ae1027a6acf -> 1975ea83b712, add a column
|
||||
|
||||
$ alembic current
|
||||
27c6a30d7c24 (head)
|
||||
|
||||
$ alembic downgrade -1
|
||||
INFO [alembic.migration] Running downgrade 27c6a30d7c24 -> 1975ea83b712, add shopping cart table
|
||||
|
||||
$ alembic current
|
||||
1975ea83b712 (branchpoint)
|
||||
|
||||
$ alembic downgrade -1
|
||||
INFO [alembic.migration] Running downgrade 1975ea83b712 -> , create account table
|
||||
|
||||
$ alembic current
|
||||
|
||||
Referring to a Specific Version
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
We can pass a specific version number to ``upgrade``. Alembic will ensure that
|
||||
all revisions upon which this version depends are invoked, and nothing more.
|
||||
So if we ``upgrade`` either to ``27c6a30d7c24`` or ``ae1027a6acf`` specifically,
|
||||
it guarantees that ``1975ea83b712`` will have been applied, but not that
|
||||
any "sibling" versions are applied::
|
||||
|
||||
$ alembic upgrade 27c6a
|
||||
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
|
||||
With ``1975ea83b712`` and ``27c6a30d7c24`` applied, ``ae1027a6acf`` is just
|
||||
a single additional step::
|
||||
|
||||
$ alembic upgrade ae102
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
|
||||
|
||||
Working with Branch Labels
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To satisfy the use case where an environment has long-lived branches, especially
|
||||
independent branches as will be discussed in the next section, Alembic supports
|
||||
the concept of **branch labels**. These are string values that are present
|
||||
within the migration file, using the new identifier ``branch_labels``.
|
||||
For example, if we want to refer to the "shopping cart" branch using the name
|
||||
"shoppingcart", we can add that name to our file
|
||||
``27c6a30d7c24_add_shopping_cart_table.py``::
|
||||
|
||||
"""add shopping cart table
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '27c6a30d7c24'
|
||||
down_revision = '1975ea83b712'
|
||||
branch_labels = ('shoppingcart',)
|
||||
|
||||
# ...
|
||||
|
||||
The ``branch_labels`` attribute refers to a string name, or a tuple
|
||||
of names, which will now apply to this revision, all descendants of this
|
||||
revision, as well as all ancestors of this revision up until the preceding
|
||||
branch point, in this case ``1975ea83b712``. We can see the ``shoppingcart``
|
||||
label applied to this revision::
|
||||
|
||||
$ alembic history
|
||||
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
|
||||
1975ea83b712 -> ae1027a6acf (head), add a column
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
With the label applied, the name ``shoppingcart`` now serves as an alias
|
||||
for the ``27c6a30d7c24`` revision specifically. We can illustrate this
|
||||
by showing it with ``alembic show``::
|
||||
|
||||
$ alembic show shoppingcart
|
||||
Rev: 27c6a30d7c24 (head)
|
||||
Parent: 1975ea83b712
|
||||
Branch names: shoppingcart
|
||||
Path: foo/versions/27c6a30d7c24_add_shopping_cart_table.py
|
||||
|
||||
add shopping cart table
|
||||
|
||||
Revision ID: 27c6a30d7c24
|
||||
Revises: 1975ea83b712
|
||||
Create Date: 2014-11-20 13:03:11.436407
|
||||
|
||||
However, when using branch labels, we usually want to refer to them using a syntax
|
||||
known as "branch at" syntax; this syntax allows us to state that we want to
|
||||
use a specific revision, let's say a "head" revision, in terms of a *specific*
|
||||
branch. While normally we can't refer to ``alembic upgrade head`` when
|
||||
there are multiple heads, we *can* refer to this head specifically using
|
||||
``shoppingcart@head`` syntax::
|
||||
|
||||
$ alembic upgrade shoppingcart@head
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
|
||||
The ``shoppingcart@head`` syntax becomes important to us if we wish to
|
||||
add new migration files to our versions directory while maintaining multiple
|
||||
branches. Just like the ``upgrade`` command, if we attempted to add a new
|
||||
revision file to our multiple-heads layout without a specific parent revision,
|
||||
we'd get a familiar error::
|
||||
|
||||
$ alembic revision -m "add a shopping cart column"
|
||||
FAILED: Multiple heads are present; please specify the head revision on
|
||||
which the new revision should be based, or perform a merge.
|
||||
|
||||
The ``alembic revision`` command is pretty clear in what we need to do;
|
||||
to add our new revision specifically to the ``shoppingcart`` branch,
|
||||
we use the ``--head`` argument, either with the specific revision identifier
|
||||
``27c6a30d7c24``, or more generically using our branchname ``shoppingcart@head``::
|
||||
|
||||
$ alembic revision -m "add a shopping cart column" --head shoppingcart@head
|
||||
Generating /path/to/foo/versions/d747a8a8879_add_a_shopping_cart_column.py ... done
|
||||
|
||||
``alembic history`` shows both files now part of the ``shoppingcart`` branch::
|
||||
|
||||
$ alembic history
|
||||
1975ea83b712 -> ae1027a6acf (head), add a column
|
||||
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
|
||||
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
We can limit our history operation just to this branch as well::
|
||||
|
||||
$ alembic history -r shoppingcart:
|
||||
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
|
||||
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
|
||||
|
||||
If we want to illustrate the path of ``shoppingcart`` all the way from the
|
||||
base, we can do that as follows::
|
||||
|
||||
$ alembic history -r :shoppingcart@head
|
||||
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
|
||||
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
We can run this operation from the "base" side as well, but we get a different
|
||||
result::
|
||||
|
||||
$ alembic history -r shoppingcart@base:
|
||||
1975ea83b712 -> ae1027a6acf (head), add a column
|
||||
27c6a30d7c24 -> d747a8a8879 (shoppingcart) (head), add a shopping cart column
|
||||
1975ea83b712 -> 27c6a30d7c24 (shoppingcart), add shopping cart table
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
When we list from ``shoppingcart@base`` without an endpoint, it's really shorthand
|
||||
for ``-r shoppingcart@base:heads``, e.g. all heads, and since ``shoppingcart@base``
|
||||
is the same "base" shared by the ``ae1027a6acf`` revision, we get that
|
||||
revision in our listing as well. The ``<branchname>@base`` syntax can be
|
||||
useful when we are dealing with individual bases, as we'll see in the next
|
||||
section.
|
||||
|
||||
The ``<branchname>@head`` format can also be used with revision numbers
|
||||
instead of branch names, though this is less convenient. If we wanted to
|
||||
add a new revision to our branch that includes the un-labeled ``ae1027a6acf``,
|
||||
and this weren't already a head, we could ask for the "head of the branch
|
||||
that includes ``ae1027a6acf``" as follows::
|
||||
|
||||
$ alembic revision -m "add another account column" --head ae10@head
|
||||
Generating /path/to/foo/versions/55af2cb1c267_add_another_account_column.py ... done
|
||||
|
||||
More Label Syntaxes
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
The ``heads`` symbol can be combined with a branch label, in the case that
|
||||
your labeled branch itself breaks off into multiple branches::
|
||||
|
||||
$ alembic upgrade shoppingcart@heads
|
||||
|
||||
Relative identifiers, as introduced in :ref:`relative_migrations`,
|
||||
work with labels too. For example, upgrading to ``shoppingcart@+2``
|
||||
means to upgrade from current heads on "shoppingcart" upwards two revisions::
|
||||
|
||||
$ alembic upgrade shoppingcart@+2
|
||||
|
||||
This kind of thing works from history as well::
|
||||
|
||||
$ alembic history -r current:shoppingcart@+2
|
||||
|
||||
The newer ``relnum+delta`` format can be combined as well, for example
|
||||
if we wanted to list along ``shoppingcart`` up until two revisions
|
||||
before the head::
|
||||
|
||||
$ alembic history -r :shoppingcart@head-2
|
||||
|
||||
.. _multiple_bases:
|
||||
|
||||
Working with Multiple Bases
|
||||
---------------------------
|
||||
|
||||
.. note:: The multiple base feature is intended to allow for multiple Alembic
|
||||
versioning lineages which **share the same alembic_version table**. This is
|
||||
so that individual revisions within the lineages can have cross-dependencies
|
||||
on each other. For the simpler case where one project has multiple,
|
||||
**completely independent** revision lineages that refer to **separate**
|
||||
alembic_version tables, see the example in :ref:`multiple_environments`.
|
||||
|
||||
We've seen in the previous section that ``alembic upgrade`` is fine
|
||||
if we have multiple heads, ``alembic revision`` allows us to tell it which
|
||||
"head" we'd like to associate our new revision file with, and branch labels
|
||||
allow us to assign names to branches that we can use in subsequent commands.
|
||||
Let's put all these together and refer to a new "base", that is, a whole
|
||||
new tree of revision files that will be semi-independent of the account/shopping
|
||||
cart revisions we've been working with. This new tree will deal with
|
||||
database tables involving "networking".
|
||||
|
||||
.. _multiple_version_directories:
|
||||
|
||||
Setting up Multiple Version Directories
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
While optional, it is often the case that when working with multiple bases,
|
||||
we'd like different sets of version files to exist within their own directories;
|
||||
typically, if an application is organized into several sub-modules, each
|
||||
one would have a version directory containing migrations pertinent to
|
||||
that module. So to start out, we can edit ``alembic.ini`` to refer
|
||||
to multiple directories; we'll also state the current ``versions``
|
||||
directory as one of them::
|
||||
|
||||
# version location specification; this defaults
|
||||
# to foo/versions. When using multiple version
|
||||
# directories, initial revisions must be specified with --version-path
|
||||
version_locations = %(here)s/model/networking %(here)s/alembic/versions
|
||||
|
||||
The new directory ``%(here)s/model/networking`` is in terms of where
|
||||
the ``alembic.ini`` file is, as we are using the symbol ``%(here)s`` which
|
||||
resolves to this location. When we create our first new revision
|
||||
targeted at this directory,
|
||||
``model/networking`` will be created automatically if it does not
|
||||
exist yet. Once we've created a revision here, the path is used automatically
|
||||
when generating subsequent revision files that refer to this revision tree.
|
||||
|
||||
Creating a Labeled Base Revision
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
We also want our new branch to have its own name, and for that we want to
|
||||
apply a branch label to the base. In order to achieve this using the
|
||||
``alembic revision`` command without editing, we need to ensure our
|
||||
``script.py.mako`` file, used
|
||||
for generating new revision files, has the appropriate substitutions present.
|
||||
If Alembic version 0.7.0 or greater was used to generate the original
|
||||
migration environment, this is already done. However when working with an older
|
||||
environment, ``script.py.mako`` needs to have this directive added, typically
|
||||
underneath the ``down_revision`` directive::
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = ${repr(up_revision)}
|
||||
down_revision = ${repr(down_revision)}
|
||||
|
||||
# add this here in order to use revision with branch_label
|
||||
branch_labels = ${repr(branch_labels)}
|
||||
|
||||
With this in place, we can create a new revision file, starting up a branch
|
||||
that will deal with database tables involving networking; we specify the
|
||||
``--head`` version of ``base``, a ``--branch-label`` of ``networking``,
|
||||
and the directory we want this first revision file to be
|
||||
placed in with ``--version-path``::
|
||||
|
||||
$ alembic revision -m "create networking branch" --head=base --branch-label=networking --version-path=model/networking
|
||||
Creating directory /path/to/foo/model/networking ... done
|
||||
Generating /path/to/foo/model/networking/3cac04ae8714_create_networking_branch.py ... done
|
||||
|
||||
If we ran the above command and we didn't have the newer ``script.py.mako``
|
||||
directive, we'd get this error::
|
||||
|
||||
FAILED: Version 3cac04ae8714 specified branch_labels networking, however
|
||||
the migration file foo/model/networking/3cac04ae8714_create_networking_branch.py
|
||||
does not have them; have you upgraded your script.py.mako to include the 'branch_labels'
|
||||
section?
|
||||
|
||||
When we receive the above error, and we would like to try again, we need to
|
||||
either **delete** the incorrectly generated file in order to run ``revision``
|
||||
again, *or* edit the ``3cac04ae8714_create_networking_branch.py`` file
|
||||
directly to add the ``branch_labels`` entry of our choosing.
|
||||
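
The manual edit amounts to adding the label to the revision identifiers in
that file directly; a sketch of the result::

    # revision identifiers, used by Alembic.
    revision = '3cac04ae8714'
    down_revision = None
    branch_labels = ('networking',)
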
|
||||
Running with Multiple Bases
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Once we have a new, permanent (for as long as we desire it to be)
|
||||
base in our system, we'll always have multiple heads present::
|
||||
|
||||
$ alembic heads
|
||||
3cac04ae8714 (networking) (head)
|
||||
27c6a30d7c24 (shoppingcart) (head)
|
||||
ae1027a6acf (head)
|
||||
|
||||
When we want to add a new revision file to ``networking``, we specify
|
||||
``networking@head`` as the ``--head``. The appropriate version directory
|
||||
is now selected automatically based on the head we choose::
|
||||
|
||||
$ alembic revision -m "add ip number table" --head=networking@head
|
||||
Generating /path/to/foo/model/networking/109ec7d132bf_add_ip_number_table.py ... done
|
||||
|
||||
It's important that we refer to the head using ``networking@head``; if we
|
||||
only refer to ``networking``, that refers to only ``3cac04ae8714`` specifically;
|
||||
if we specify this and it's not a head, ``alembic revision`` will make sure
|
||||
we didn't mean to specify the head::
|
||||
|
||||
$ alembic revision -m "add DNS table" --head=networking
|
||||
FAILED: Revision 3cac04ae8714 is not a head revision; please
|
||||
specify --splice to create a new branch from this revision
|
||||
|
||||
As mentioned earlier, as this base is independent, we can view its history
|
||||
from the base using ``history -r networking@base:``::
|
||||
|
||||
$ alembic history -r networking@base:
|
||||
109ec7d132bf -> 29f859a13ea (networking) (head), add DNS table
|
||||
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
|
||||
<base> -> 3cac04ae8714 (networking), create networking branch
|
||||
|
||||
At the moment, this is the same output we'd get if we used
|
||||
``-r :networking@head``. However, that will change later on as we use
|
||||
additional directives.
|
||||
|
||||
We may now run upgrades or downgrades freely, among individual branches
|
||||
(let's assume a clean database again)::
|
||||
|
||||
$ alembic upgrade networking@head
|
||||
INFO [alembic.migration] Running upgrade -> 3cac04ae8714, create networking branch
|
||||
INFO [alembic.migration] Running upgrade 3cac04ae8714 -> 109ec7d132bf, add ip number table
|
||||
INFO [alembic.migration] Running upgrade 109ec7d132bf -> 29f859a13ea, add DNS table
|
||||
|
||||
or against the whole thing using ``heads``::
|
||||
|
||||
$ alembic upgrade heads
|
||||
INFO [alembic.migration] Running upgrade -> 1975ea83b712, create account table
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> 27c6a30d7c24, add shopping cart table
|
||||
INFO [alembic.migration] Running upgrade 27c6a30d7c24 -> d747a8a8879, add a shopping cart column
|
||||
INFO [alembic.migration] Running upgrade 1975ea83b712 -> ae1027a6acf, add a column
|
||||
INFO [alembic.migration] Running upgrade ae1027a6acf -> 55af2cb1c267, add another account column
|
||||
|
||||
Branch Dependencies
|
||||
-------------------
|
||||
|
||||
When working with multiple roots, it is expected that these different
|
||||
revision streams will need to refer to one another. For example, a new
|
||||
revision in ``networking`` which needs to refer to the ``account``
|
||||
table will want to establish ``55af2cb1c267, add another account column``,
|
||||
the last revision that
|
||||
works with the account table, as a dependency. From a graph perspective,
|
||||
this means nothing more than that the new file will feature both
|
||||
``55af2cb1c267, add another account column`` and ``29f859a13ea, add DNS table`` as "down" revisions,
|
||||
and looks just as though we had merged these two branches together. However,
|
||||
we don't want to consider these as "merged"; we want the two revision
|
||||
streams to *remain independent*, even though a version in ``networking``
|
||||
is going to reach over into the other stream. To support this use case,
|
||||
Alembic provides a directive known as ``depends_on``, which allows
|
||||
a revision file to refer to another as a "dependency", very similar to
|
||||
an entry in ``down_revision`` from a graph perspective, but different
|
||||
from a semantic perspective.
|
||||
|
||||
To use ``depends_on``, we can specify it as part of our ``alembic revision``
|
||||
command::
|
||||
|
||||
$ alembic revision -m "add ip account table" --head=networking@head --depends-on=55af2cb1c267
|
||||
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
|
||||
|
||||
Within our migration file, we'll see this new directive present::
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '2a95102259be'
|
||||
down_revision = '29f859a13ea'
|
||||
branch_labels = None
|
||||
depends_on='55af2cb1c267'
|
||||
|
||||
``depends_on`` may be either a real revision number or a branch
|
||||
name. When specified at the command line, a resolution from a
|
||||
partial revision number will work as well. It can refer
|
||||
to any number of dependent revisions; for example, if we were
|
||||
to run the command::
|
||||
|
||||
$ alembic revision -m "add ip account table" \\
|
||||
--head=networking@head \\
|
||||
--depends-on=55af2cb1c267 --depends-on=d747a --depends-on=fa445
|
||||
Generating /path/to/foo/model/networking/2a95102259be_add_ip_account_table.py ... done
|
||||
|
||||
We'd see inside the file::
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '2a95102259be'
|
||||
down_revision = '29f859a13ea'
|
||||
branch_labels = None
|
||||
depends_on = ('55af2cb1c267', 'd747a8a8879', 'fa4456a9201')
|
||||
|
||||
We also can of course add or alter this value within the file manually after
|
||||
it is generated, rather than using the ``--depends-on`` argument.
|
||||
|
||||
.. versionadded:: 0.8 The ``depends_on`` attribute may be set directly
|
||||
from the ``alembic revision`` command, rather than editing the file
|
||||
directly. ``depends_on`` identifiers may also be specified as
|
||||
branch names at the command line or directly within the migration file.
|
||||
The values may be specified as partial revision numbers from the command
|
||||
line which will be resolved to full revision numbers in the output file.
|
||||
|
||||
We can see the effect this directive has when we view the history
|
||||
of the ``networking`` branch in terms of "heads", e.g., all the revisions that
|
||||
are descendants::
|
||||
|
||||
$ alembic history -r :networking@head
|
||||
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
|
||||
109ec7d132bf -> 29f859a13ea (networking), add DNS table
|
||||
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
|
||||
<base> -> 3cac04ae8714 (networking), create networking branch
|
||||
ae1027a6acf -> 55af2cb1c267 (effective head), add another account column
|
||||
1975ea83b712 -> ae1027a6acf, Add a column
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
What we see is that the full history of the ``networking`` branch, in terms
|
||||
of an "upgrade" to the "head", will include that the tree building
|
||||
up to ``55af2cb1c267, add another account column``
|
||||
will be pulled in first. Interestingly, we don't see this displayed
|
||||
when we display history in the other direction, e.g. from ``networking@base``::
|
||||
|
||||
$ alembic history -r networking@base:
|
||||
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
|
||||
109ec7d132bf -> 29f859a13ea (networking), add DNS table
|
||||
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
|
||||
<base> -> 3cac04ae8714 (networking), create networking branch
|
||||
|
||||
The reason for the discrepancy is that displaying history from the base
|
||||
shows us what would occur if we ran a downgrade operation, instead of an
|
||||
upgrade. If we downgraded all the files in ``networking`` using
|
||||
``networking@base``, the dependencies aren't affected; they're left in place.
|
||||
|
||||
We also see something odd if we view ``heads`` at the moment::
|
||||
|
||||
$ alembic heads
|
||||
2a95102259be (networking) (head)
|
||||
27c6a30d7c24 (shoppingcart) (head)
|
||||
55af2cb1c267 (effective head)
|
||||
|
||||
The head file that we used as a "dependency", ``55af2cb1c267``, is displayed
|
||||
as an "effective" head, which we can see also in the history display earlier.
|
||||
What this means is that at the moment, if we were to upgrade all versions
|
||||
to the top, the ``55af2cb1c267`` revision number would not actually be
|
||||
present in the ``alembic_version`` table; this is because it does not have
|
||||
a branch of its own subsequent to the ``2a95102259be`` revision which depends
|
||||
on it::
|
||||
|
||||
$ alembic upgrade heads
|
||||
INFO [alembic.migration] Running upgrade 29f859a13ea, 55af2cb1c267 -> 2a95102259be, add ip account table
|
||||
|
||||
$ alembic current
|
||||
2a95102259be (head)
|
||||
27c6a30d7c24 (head)
|
||||
|
||||
The entry is still displayed in ``alembic heads`` because Alembic knows that
|
||||
even though this revision isn't a "real" head, it's still something that
|
||||
we developers consider semantically to be a head, so it's displayed, noting
|
||||
its special status so that we don't get quite as confused when we don't
|
||||
see it within ``alembic current``.
|
||||
|
||||
If we add a new revision onto ``55af2cb1c267``, the branch again becomes
|
||||
a "real" branch which can have its own entry in the database::
|
||||
|
||||
$ alembic revision -m "more account changes" --head=55af2cb@head
|
||||
Generating /path/to/foo/versions/34e094ad6ef1_more_account_changes.py ... done
|
||||
|
||||
$ alembic upgrade heads
|
||||
INFO [alembic.migration] Running upgrade 55af2cb1c267 -> 34e094ad6ef1, more account changes
|
||||
|
||||
$ alembic current
|
||||
2a95102259be (head)
|
||||
27c6a30d7c24 (head)
|
||||
34e094ad6ef1 (head)
|
||||
|
||||
|
||||
For posterity, the revision tree now looks like::
|
||||
|
||||
$ alembic history
|
||||
29f859a13ea (55af2cb1c267) -> 2a95102259be (networking) (head), add ip account table
|
||||
109ec7d132bf -> 29f859a13ea (networking), add DNS table
|
||||
3cac04ae8714 -> 109ec7d132bf (networking), add ip number table
|
||||
<base> -> 3cac04ae8714 (networking), create networking branch
|
||||
1975ea83b712 -> 27c6a30d7c24 (shoppingcart) (head), add shopping cart table
|
||||
55af2cb1c267 -> 34e094ad6ef1 (head), more account changes
|
||||
ae1027a6acf -> 55af2cb1c267, add another account column
|
||||
1975ea83b712 -> ae1027a6acf, Add a column
|
||||
<base> -> 1975ea83b712 (branchpoint), create account table
|
||||
|
||||
|
||||
                        --- 27c6 --> d747 --> <head>
                       /    (shoppingcart)
    <base> --> 1975 -->
                       \
                        --- ae10 --> 55af --> <head>
                                      ^
                                      +---------+ (dependency)
                                                |
                                                |
    <base> --> 3cac -----> 109e ----> 29f8 ---> 2a95 --> <head>
                                                (networking)
|
||||
|
||||
|
||||
If there's any point to be made here, it's that if you branch, merge,
|
||||
and label too freely, things can get pretty crazy! Hence the branching system should
|
||||
be used carefully and thoughtfully for best results.
|
||||
|
File diff suppressed because it is too large
|
@@ -1,232 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# Alembic documentation build configuration file, created by
|
||||
# sphinx-quickstart on Sat May 1 12:47:55 2010.
|
||||
#
|
||||
# This file is execfile()d with the current directory set to its containing dir.
|
||||
#
|
||||
# Note that not all possible configuration values are present in this
|
||||
# autogenerated file.
|
||||
#
|
||||
# All configuration values have a default; values that are commented out
|
||||
# serve to show the default.
|
||||
|
||||
import sys
|
||||
import os
|
||||
|
||||
# If extensions (or modules to document with autodoc) are in another directory,
|
||||
# add these directories to sys.path here. If the directory is relative to the
|
||||
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
||||
sys.path.append(os.path.abspath('.'))
|
||||
|
||||
# If your extensions are in another directory, add it here. If the directory
|
||||
# is relative to the documentation root, use os.path.abspath to make it
|
||||
# absolute, like shown here.
|
||||
sys.path.insert(0, os.path.abspath('../../'))
|
||||
|
||||
import alembic
|
||||
|
||||
# -- General configuration -----------------------------------------------------
|
||||
|
||||
# Add any Sphinx extension module names here, as strings. They can be extensions
|
||||
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
|
||||
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',
|
||||
'changelog', 'sphinx_paramlinks']
|
||||
|
||||
# tags to sort on inside of sections
|
||||
changelog_sections = ["changed", "feature", "bug", "moved", "removed"]
|
||||
|
||||
changelog_render_ticket = "https://bitbucket.org/zzzeek/alembic/issue/%s/"
|
||||
changelog_render_pullreq = "https://bitbucket.org/zzzeek/alembic/pull-request/%s"
|
||||
|
||||
changelog_render_pullreq = {
|
||||
"bitbucket": "https://bitbucket.org/zzzeek/alembic/pull-request/%s",
|
||||
"default": "https://bitbucket.org/zzzeek/alembic/pull-request/%s",
|
||||
"github": "https://github.com/zzzeek/alembic/pull/%s",
|
||||
}
|
||||
|
||||
autodoc_default_flags = ["members"]
|
||||
|
||||
# Add any paths that contain templates here, relative to this directory.
|
||||
templates_path = ['_templates']
|
||||
|
||||
# The suffix of source filenames.
|
||||
source_suffix = '.rst'
|
||||
|
||||
# The encoding of source files.
|
||||
#source_encoding = 'utf-8'
|
||||
|
||||
nitpicky = True
|
||||
|
||||
# The master toctree document.
|
||||
master_doc = 'index'
|
||||
|
||||
# General information about the project.
|
||||
project = u'Alembic'
|
||||
copyright = u'2010-2017, Mike Bayer'
|
||||
|
||||
# The version info for the project you're documenting, acts as replacement for
|
||||
# |version| and |release|, also used in various other places throughout the
|
||||
# built documents.
|
||||
#
|
||||
# The short X.Y version.
|
||||
version = alembic.__version__
|
||||
# The full version, including alpha/beta/rc tags.
|
||||
release = alembic.__version__
|
||||
|
||||
|
||||
# The language for content autogenerated by Sphinx. Refer to documentation
|
||||
# for a list of supported languages.
|
||||
#language = None
|
||||
|
||||
# There are two options for replacing |today|: either, you set today to some
|
||||
# non-false value, then it is used:
|
||||
#today = ''
|
||||
# Else, today_fmt is used as the format for a strftime call.
|
||||
#today_fmt = '%B %d, %Y'
|
||||
|
||||
# List of documents that shouldn't be included in the build.
|
||||
#unused_docs = []
|
||||
|
||||
# List of directories, relative to source directory, that shouldn't be searched
|
||||
# for source files.
|
||||
exclude_trees = []
|
||||
|
||||
# The reST default role (used for this markup: `text`) to use for all documents.
|
||||
#default_role = None
|
||||
|
||||
# If true, '()' will be appended to :func: etc. cross-reference text.
|
||||
#add_function_parentheses = True
|
||||
|
||||
# If true, the current module name will be prepended to all description
|
||||
# unit titles (such as .. function::).
|
||||
#add_module_names = True
|
||||
|
||||
# If true, sectionauthor and moduleauthor directives will be shown in the
|
||||
# output. They are ignored by default.
|
||||
#show_authors = False
|
||||
|
||||
# The name of the Pygments (syntax highlighting) style to use.
|
||||
pygments_style = 'sphinx'
|
||||
|
||||
# A list of ignored prefixes for module index sorting.
|
||||
#modindex_common_prefix = []
|
||||
|
||||
|
||||
# -- Options for HTML output ---------------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages. Major themes that come with
|
||||
# Sphinx are currently 'default' and 'sphinxdoc'.
|
||||
html_theme = 'nature'
|
||||
|
||||
html_style = "nature_override.css"
|
||||
|
||||
|
||||
# Theme options are theme-specific and customize the look and feel of a theme
|
||||
# further. For a list of options available for each theme, see the
|
||||
# documentation.
|
||||
#html_theme_options = {}
|
||||
|
||||
# Add any paths that contain custom themes here, relative to this directory.
|
||||
#html_theme_path = []
|
||||
|
||||
# The name for this set of Sphinx documents. If None, it defaults to
|
||||
# "<project> v<release> documentation".
|
||||
#html_title = None
|
||||
|
||||
# A shorter title for the navigation bar. Default is the same as html_title.
|
||||
#html_short_title = None
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top
|
||||
# of the sidebar.
|
||||
#html_logo = None
|
||||
|
||||
# The name of an image file (within the static path) to use as favicon of the
|
||||
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
|
||||
# pixels large.
|
||||
#html_favicon = None
|
||||
|
||||
# Add any paths that contain custom static files (such as style sheets) here,
|
||||
# relative to this directory. They are copied after the builtin static files,
|
||||
# so a file named "default.css" will overwrite the builtin "default.css".
|
||||
html_static_path = ['_static']
|
||||
|
||||
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
|
||||
# using the given strftime format.
|
||||
#html_last_updated_fmt = '%b %d, %Y'
|
||||
|
||||
# If true, SmartyPants will be used to convert quotes and dashes to
|
||||
# typographically correct entities.
|
||||
#html_use_smartypants = True
|
||||
|
||||
# Custom sidebar templates, maps document names to template names.
|
||||
html_sidebars = {"**": ["site_custom_sidebars.html", "localtoc.html", "searchbox.html", "relations.html"]}
|
||||
|
||||
# Additional templates that should be rendered to pages, maps page names to
|
||||
# template names.
|
||||
#html_additional_pages = {}
|
||||
|
||||
# If false, no module index is generated.
|
||||
#html_use_modindex = True
|
||||
|
||||
# If false, no index is generated.
|
||||
#html_use_index = True
|
||||
|
||||
# If true, the index is split into individual pages for each letter.
|
||||
#html_split_index = False
|
||||
|
||||
# If true, links to the reST sources are added to the pages.
|
||||
#html_show_sourcelink = True
|
||||
|
||||
# If true, an OpenSearch description file will be output, and all pages will
|
||||
# contain a <link> tag referring to it. The value of this option must be the
|
||||
# base URL from which the finished HTML is served.
|
||||
#html_use_opensearch = ''
|
||||
|
||||
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
|
||||
#html_file_suffix = ''
|
||||
|
||||
# Output file base name for HTML help builder.
|
||||
htmlhelp_basename = 'Alembicdoc'
|
||||
|
||||
|
||||
# -- Options for LaTeX output --------------------------------------------------
|
||||
|
||||
# The paper size ('letter' or 'a4').
|
||||
#latex_paper_size = 'letter'
|
||||
|
||||
# The font size ('10pt', '11pt' or '12pt').
|
||||
#latex_font_size = '10pt'
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title, author, documentclass [howto/manual]).
|
||||
latex_documents = [
|
||||
('index', 'Alembic.tex', u'Alembic Documentation',
|
||||
u'Mike Bayer', 'manual'),
|
||||
]
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top of
|
||||
# the title page.
|
||||
#latex_logo = None
|
||||
|
||||
# For "manual" documents, if this is true, then toplevel headings are parts,
|
||||
# not chapters.
|
||||
#latex_use_parts = False
|
||||
|
||||
# Additional stuff for the LaTeX preamble.
|
||||
#latex_preamble = ''
|
||||
|
||||
# Documents to append as an appendix to all manuals.
|
||||
#latex_appendices = []
|
||||
|
||||
# If false, no module index is generated.
|
||||
#latex_use_modindex = True
|
||||
|
||||
|
||||
#{'python': ('http://docs.python.org/3.2', None)}
|
||||
|
||||
autoclass_content = "both"
|
||||
|
||||
intersphinx_mapping = {
|
||||
'sqla':('http://www.sqlalchemy.org/docs/', None),
|
||||
}
|
|
@@ -1,883 +0,0 @@
|
|||
========
|
||||
Cookbook
|
||||
========
|
||||
|
||||
A collection of "How-Tos" highlighting popular ways to extend
|
||||
Alembic.
|
||||
|
||||
.. note::
|
||||
|
||||
This is a new section where we catalogue various "how-tos"
|
||||
based on user requests. It is often the case that users
|
||||
will request a feature only to learn it can be provided with
|
||||
a simple customization.
|
||||
|
||||
.. _building_uptodate:
|
||||
|
||||
Building an Up to Date Database from Scratch
|
||||
=============================================
|
||||
|
||||
There's a theory of database migrations that says that the revisions in existence for a database should be
|
||||
able to go from an entirely blank schema to the finished product, and back again. Alembic can roll
|
||||
this way. We think it's kind of overkill, though, considering that SQLAlchemy itself can emit
|
||||
the full CREATE statements for any given model using :meth:`~sqlalchemy.schema.MetaData.create_all`. If you check out
|
||||
a copy of an application, running this will give you the entire database in one shot, without the need
|
||||
to run through all those migration files, which are instead tailored towards applying incremental
|
||||
changes to an existing database.
|
||||
|
||||
Alembic can integrate with a :meth:`~sqlalchemy.schema.MetaData.create_all` script quite easily. After running the
|
||||
create operation, tell Alembic to create a new version table, and to stamp it with the most recent
|
||||
revision (i.e. ``head``)::
|
||||
|
||||
# inside of a "create the database" script, first create
|
||||
# tables:
|
||||
my_metadata.create_all(engine)
|
||||
|
||||
# then, load the Alembic configuration and generate the
|
||||
# version table, "stamping" it with the most recent rev:
|
||||
from alembic.config import Config
|
||||
from alembic import command
|
||||
alembic_cfg = Config("/path/to/yourapp/alembic.ini")
|
||||
command.stamp(alembic_cfg, "head")
|
||||
|
||||
When this approach is used, the application can generate the database using normal SQLAlchemy
|
||||
techniques instead of iterating through hundreds of migration scripts. Now, the purpose of the
|
||||
migration scripts is relegated just to movement between versions on out-of-date databases, not
|
||||
*new* databases. You can now remove old migration files that are no longer represented
|
||||
on any existing environments.
|
||||
|
||||
To prune old migration files, simply delete the files. Then, in the earliest, still-remaining
|
||||
migration file, set ``down_revision`` to ``None``::
|
||||
|
||||
# replace this:
|
||||
#down_revision = '290696571ad2'
|
||||
|
||||
# with this:
|
||||
down_revision = None
|
||||
|
||||
That file now becomes the "base" of the migration series.
|
||||
|
||||
Conditional Migration Elements
|
||||
==============================
|
||||
|
||||
This example features the basic idea of a common need, that of affecting
|
||||
how a migration runs based on command line switches.
|
||||
|
||||
The technique to use here is simple; within a migration script, inspect
|
||||
the :meth:`.EnvironmentContext.get_x_argument` collection for any additional,
|
||||
user-defined parameters. Then take action based on the presence of those
|
||||
arguments.
|
||||
|
||||
To make it such that the logic to inspect these flags is easy to use and
|
||||
modify, we modify our ``script.py.mako`` template to make this feature
|
||||
available in all new revision files:
|
||||
|
||||
.. code-block:: mako
|
||||
|
||||
"""${message}
|
||||
|
||||
Revision ID: ${up_revision}
|
||||
Revises: ${down_revision}
|
||||
Create Date: ${create_date}
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = ${repr(up_revision)}
|
||||
down_revision = ${repr(down_revision)}
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
${imports if imports else ""}
|
||||
|
||||
from alembic import context
|
||||
|
||||
|
||||
def upgrade():
|
||||
schema_upgrades()
|
||||
if context.get_x_argument(as_dictionary=True).get('data', None):
|
||||
data_upgrades()
|
||||
|
||||
def downgrade():
|
||||
if context.get_x_argument(as_dictionary=True).get('data', None):
|
||||
data_downgrades()
|
||||
schema_downgrades()
|
||||
|
||||
def schema_upgrades():
|
||||
"""schema upgrade migrations go here."""
|
||||
${upgrades if upgrades else "pass"}
|
||||
|
||||
def schema_downgrades():
|
||||
"""schema downgrade migrations go here."""
|
||||
${downgrades if downgrades else "pass"}
|
||||
|
||||
def data_upgrades():
|
||||
"""Add any optional data upgrade migrations here!"""
|
||||
pass
|
||||
|
||||
def data_downgrades():
|
||||
"""Add any optional data downgrade migrations here!"""
|
||||
pass
|
||||
|
||||
Now, when we create a new migration file, the ``data_upgrades()`` and ``data_downgrades()``
|
||||
placeholders will be available, where we can add optional data migrations::
|
||||
|
||||
"""rev one
|
||||
|
||||
Revision ID: 3ba2b522d10d
|
||||
Revises: None
|
||||
Create Date: 2014-03-04 18:05:36.992867
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '3ba2b522d10d'
|
||||
down_revision = None
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
from sqlalchemy import String, Column
|
||||
from sqlalchemy.sql import table, column
|
||||
|
||||
from alembic import context
|
||||
|
||||
def upgrade():
|
||||
schema_upgrades()
|
||||
if context.get_x_argument(as_dictionary=True).get('data', None):
|
||||
data_upgrades()
|
||||
|
||||
def downgrade():
|
||||
if context.get_x_argument(as_dictionary=True).get('data', None):
|
||||
data_downgrades()
|
||||
schema_downgrades()
|
||||
|
||||
def schema_upgrades():
|
||||
"""schema upgrade migrations go here."""
|
||||
op.create_table("my_table", Column('data', String))
|
||||
|
||||
def schema_downgrades():
|
||||
"""schema downgrade migrations go here."""
|
||||
op.drop_table("my_table")
|
||||
|
||||
def data_upgrades():
|
||||
"""Add any optional data upgrade migrations here!"""
|
||||
|
||||
my_table = table('my_table',
|
||||
column('data', String),
|
||||
)
|
||||
|
||||
op.bulk_insert(my_table,
|
||||
[
|
||||
{'data': 'data 1'},
|
||||
{'data': 'data 2'},
|
||||
{'data': 'data 3'},
|
||||
]
|
||||
)
|
||||
|
||||
def data_downgrades():
|
||||
"""Add any optional data downgrade migrations here!"""
|
||||
|
||||
op.execute("delete from my_table")
|
||||
|
||||
To invoke our migrations with data included, we use the ``-x`` flag::
|
||||
|
||||
alembic -x data=true upgrade head
|
||||
|
||||
The :meth:`.EnvironmentContext.get_x_argument` is an easy way to support
|
||||
new commandline options within environment and migration scripts.
|
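Note that ``-x`` may be given more than once; each ``key=value`` pair becomes
an entry in the dictionary returned by
:meth:`.EnvironmentContext.get_x_argument`. A sketch, where the ``tenant``
key is purely illustrative::

    alembic -x data=true -x tenant=acme upgrade head

and within ``env.py`` or a migration script::

    x_args = context.get_x_argument(as_dictionary=True)
    if x_args.get('tenant') == 'acme':
        # take tenant-specific action here
        pass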
||||
|
||||
.. _connection_sharing:
|
||||
|
||||
Sharing a Connection with a Series of Migration Commands and Environments
|
||||
=========================================================================
|
||||
|
||||
It is often the case that an application will need to call upon a series
|
||||
of commands within :ref:`alembic.command.toplevel`, where it would be advantageous
|
||||
for all operations to proceed along a single transaction. The connectivity
|
||||
for a migration is typically solely determined within the ``env.py`` script
|
||||
of a migration environment, which is called within the scope of a command.
|
||||
|
||||
The steps to take here are:
|
||||
|
||||
1. Produce the :class:`~sqlalchemy.engine.Connection` object to use.
|
||||
|
||||
2. Place it somewhere that ``env.py`` will be able to access it. This
|
||||
can be either a. a module-level global somewhere, or b.
|
||||
an attribute which we place into the :attr:`.Config.attributes`
|
||||
dictionary (if we are on an older Alembic version, we may also attach
|
||||
an attribute directly to the :class:`.Config` object).
|
||||
|
||||
3. The ``env.py`` script is modified such that it looks for this
|
||||
:class:`~sqlalchemy.engine.Connection` and makes use of it, in lieu
|
||||
of building up its own :class:`~sqlalchemy.engine.Engine` instance.
|
||||
|
||||
We illustrate using :attr:`.Config.attributes`::
|
||||
|
||||
from alembic import command, config
|
||||
|
||||
cfg = config.Config("/path/to/yourapp/alembic.ini")
|
||||
with engine.begin() as connection:
|
||||
cfg.attributes['connection'] = connection
|
||||
command.upgrade(cfg, "head")
|
||||
|
||||
Then in ``env.py``::
|
||||
|
||||
def run_migrations_online():
|
||||
connectable = config.attributes.get('connection', None)
|
||||
|
||||
if connectable is None:
|
||||
# only create Engine if we don't have a Connection
|
||||
# from the outside
|
||||
connectable = engine_from_config(
|
||||
config.get_section(config.config_ini_section),
|
||||
prefix='sqlalchemy.',
|
||||
poolclass=pool.NullPool)
|
||||
|
||||
# when connectable is already a Connection object, calling
|
||||
# connect() gives us a *branched connection*.
|
||||
|
||||
with connectable.connect() as connection:
|
||||
context.configure(
|
||||
connection=connection,
|
||||
target_metadata=target_metadata
|
||||
)
|
||||
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
.. topic:: Branched Connections
|
||||
|
||||
Note that we are calling the ``connect()`` method, **even if we are
|
||||
using a** :class:`~sqlalchemy.engine.Connection` **object to start with**.
|
||||
The effect this has when calling :meth:`~sqlalchemy.engine.Connection.connect`
|
||||
is that SQLAlchemy passes us a **branch** of the original connection; it
|
||||
is in every way the same as the :class:`~sqlalchemy.engine.Connection`
|
||||
we started with, except it provides **nested scope**; the
|
||||
context we have here as well as the
|
||||
:meth:`~sqlalchemy.engine.Connection.close` method of this branched
|
||||
connection doesn't actually close the outer connection, which stays
|
||||
active for continued use.
|
||||
|
||||
.. versionadded:: 0.7.5 Added :attr:`.Config.attributes`.
|
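Since the caller controls the outer transaction, multiple commands can now
share one connection and commit (or roll back) together. A sketch continuing
from the ``cfg`` and ``engine`` above; the intermediate revision id is a
placeholder::

    with engine.begin() as connection:
        cfg.attributes['connection'] = connection
        command.upgrade(cfg, "ae1027a6acf")  # hypothetical intermediate revision
        command.upgrade(cfg, "head")         # runs in the same transaction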
||||
|
||||
.. _replaceable_objects:
|
||||
|
||||
Replaceable Objects
|
||||
===================
|
||||
|
||||
This recipe proposes a hypothetical way of dealing with
|
||||
what we might call a *replaceable* schema object. A replaceable object
|
||||
is a schema object that needs to be created and dropped all at once.
|
||||
Examples of such objects include views, stored procedures, and triggers.
|
||||
|
||||
Replaceable objects present a problem in that in order to make incremental
|
||||
changes to them, we have to refer to the whole definition at once.
|
||||
If we need to add a new column to a view, for example, we have to drop
|
||||
it entirely and recreate it fresh with the extra column added, referring to
|
||||
the whole structure; but to make it even tougher, if we wish to support
|
||||
downgrade operations in our migration scripts,
|
||||
we need to refer to the *previous* version of that
|
||||
construct fully, and we'd much rather not have to type out the whole
|
||||
definition in multiple places.
|
||||
|
||||
This recipe proposes that we may refer to the older version of a
|
||||
replaceable construct by directly naming the migration version in
|
||||
which it was created, and having a migration refer to that previous
|
||||
file as migrations run. We will also demonstrate how to integrate this
|
||||
logic within the :ref:`operation_plugins` feature introduced in
|
||||
Alembic 0.8. It may be very helpful to review
|
||||
this section first to get an overview of this API.
|
||||
|
||||
The Replaceable Object Structure
|
||||
--------------------------------
|
||||
|
||||
We first need to devise a simple format that represents the "CREATE XYZ" /
|
||||
"DROP XYZ" aspect of what it is we're building. We will work with an object
|
||||
that represents a textual definition; while a SQL view is an object that we can define
|
||||
using a `table-metadata-like system <https://bitbucket.org/zzzeek/sqlalchemy/wiki/UsageRecipes/Views>`_,
|
||||
this is not so much the case for things like stored procedures, where
|
||||
we pretty much need to have a full string definition written down somewhere.
|
||||
We'll use a simple value object called ``ReplaceableObject`` that can
|
||||
represent any named set of SQL text to send to a "CREATE" statement of
|
||||
some kind::
|
||||
|
||||
class ReplaceableObject(object):
|
||||
def __init__(self, name, sqltext):
|
||||
self.name = name
|
||||
self.sqltext = sqltext
|
||||
|
||||
Using this object in a migration script, assuming a Postgresql-style
|
||||
syntax, looks like::
|
||||
|
||||
customer_view = ReplaceableObject(
|
||||
"customer_view",
|
||||
"SELECT name, order_count FROM customer WHERE order_count > 0"
|
||||
)
|
||||
|
||||
add_customer_sp = ReplaceableObject(
|
||||
"add_customer_sp(name varchar, order_count integer)",
|
||||
"""
|
||||
RETURNS integer AS $$
|
||||
BEGIN
|
||||
insert into customer (name, order_count)
|
||||
VALUES (in_name, in_order_count);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
"""
|
||||
)
|
||||
|
||||
The ``ReplaceableObject`` class is only one very simplistic way to do this.
|
||||
The structure of how we represent our schema objects
|
||||
is not too important for the purposes of this example; we can just
|
||||
as well put strings inside of tuples or dictionaries, or define
|
||||
any series of fields and class structures we want.
|
||||
The only important part is that below we will illustrate how to organize the
|
||||
code that can consume the structure we create here.
|
||||
|
||||
Create Operations for the Target Objects
|
||||
----------------------------------------
|
||||
|
||||
We'll use the :class:`.Operations` extension API to make new operations
|
||||
for create, drop, and replace of views and stored procedures. Using this
|
||||
API is also optional; we can just as well make any kind of Python
|
||||
function that we would invoke from our migration scripts.
|
||||
However, using this API gives us operations
|
||||
built directly into the Alembic ``op.*`` namespace very nicely.
|
||||
|
||||
The most intricate class is below. This is the base of our "replaceable"
|
||||
operation, which includes not just a base operation for emitting
|
||||
CREATE and DROP instructions on a ``ReplaceableObject``, but also assumes
|
||||
a certain model of "reversibility" which makes use of references to
|
||||
other migration files in order to refer to the "previous" version
|
||||
of an object::
|
||||
|
||||
from alembic.operations import Operations, MigrateOperation
|
||||
|
||||
class ReversibleOp(MigrateOperation):
|
||||
def __init__(self, target):
|
||||
self.target = target
|
||||
|
||||
@classmethod
|
||||
def invoke_for_target(cls, operations, target):
|
||||
op = cls(target)
|
||||
return operations.invoke(op)
|
||||
|
||||
def reverse(self):
|
||||
raise NotImplementedError()
|
||||
|
||||
@classmethod
|
||||
def _get_object_from_version(cls, operations, ident):
|
||||
version, objname = ident.split(".")
|
||||
|
||||
module = operations.get_context().script.get_revision(version).module
|
||||
obj = getattr(module, objname)
|
||||
return obj
|
||||
|
||||
@classmethod
|
||||
def replace(cls, operations, target, replaces=None, replace_with=None):
|
||||
|
||||
if replaces:
|
||||
old_obj = cls._get_object_from_version(operations, replaces)
|
||||
drop_old = cls(old_obj).reverse()
|
||||
create_new = cls(target)
|
||||
elif replace_with:
|
||||
old_obj = cls._get_object_from_version(operations, replace_with)
|
||||
drop_old = cls(target).reverse()
|
||||
create_new = cls(old_obj)
|
||||
else:
|
||||
raise TypeError("replaces or replace_with is required")
|
||||
|
||||
operations.invoke(drop_old)
|
||||
operations.invoke(create_new)
|
||||
|
||||
The workings of this class should become clear as we walk through the
|
||||
example. To create usable operations from this base, we will build
|
||||
a series of stub classes and use :meth:`.Operations.register_operation`
|
||||
to make them part of the ``op.*`` namespace::
|
||||
|
||||
@Operations.register_operation("create_view", "invoke_for_target")
|
||||
@Operations.register_operation("replace_view", "replace")
|
||||
class CreateViewOp(ReversibleOp):
|
||||
def reverse(self):
|
||||
return DropViewOp(self.target)
|
||||
|
||||
|
||||
@Operations.register_operation("drop_view", "invoke_for_target")
|
||||
class DropViewOp(ReversibleOp):
|
||||
def reverse(self):
|
||||
return CreateViewOp(self.target)
|
||||
|
||||
|
||||
@Operations.register_operation("create_sp", "invoke_for_target")
|
||||
@Operations.register_operation("replace_sp", "replace")
|
||||
class CreateSPOp(ReversibleOp):
|
||||
def reverse(self):
|
||||
return DropSPOp(self.target)
|
||||
|
||||
|
||||
@Operations.register_operation("drop_sp", "invoke_for_target")
|
||||
class DropSPOp(ReversibleOp):
|
||||
def reverse(self):
|
||||
return CreateSPOp(self.target)
|
||||
|
||||
To actually run the SQL like "CREATE VIEW" and "DROP VIEW", we'll provide
|
||||
implementations using :meth:`.Operations.implementation_for`
|
||||
that run straight into :meth:`.Operations.execute`::
|
||||
|
||||
@Operations.implementation_for(CreateViewOp)
|
||||
def create_view(operations, operation):
|
||||
operations.execute("CREATE VIEW %s AS %s" % (
|
||||
operation.target.name,
|
||||
operation.target.sqltext
|
||||
))
|
||||
|
||||
|
||||
@Operations.implementation_for(DropViewOp)
|
||||
def drop_view(operations, operation):
|
||||
operations.execute("DROP VIEW %s" % operation.target.name)
|
||||
|
||||
|
||||
@Operations.implementation_for(CreateSPOp)
|
||||
def create_sp(operations, operation):
|
||||
operations.execute(
|
||||
"CREATE FUNCTION %s %s" % (
|
||||
operation.target.name, operation.target.sqltext
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
@Operations.implementation_for(DropSPOp)
|
||||
def drop_sp(operations, operation):
|
||||
operations.execute("DROP FUNCTION %s" % operation.target.name)
|
||||
|
||||
All of the above code can be present anywhere within an application's
|
||||
source tree; the only requirement is that when the ``env.py`` script is
|
||||
invoked, it includes imports that ultimately call upon these classes
|
||||
as well as the :meth:`.Operations.register_operation` and
|
||||
:meth:`.Operations.implementation_for` sequences.
|
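A minimal way to satisfy that requirement is an import at the top of
``env.py``; the module path ``myapp.replaceable`` is hypothetical::

    # inside env.py: importing the module runs the register_operation
    # and implementation_for decorators as a side effect
    import myapp.replaceable  # noqa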
||||
|
||||
Create Initial Migrations
|
||||
-------------------------
|
||||
|
||||
We can now illustrate how these objects look during use. For the first step,
|
||||
we'll create a new migration to create a "customer" table::
|
||||
|
||||
$ alembic revision -m "create table"
|
||||
|
||||
We build the first revision as follows::
|
||||
|
||||
"""create table
|
||||
|
||||
Revision ID: 3ab8b2dfb055
|
||||
Revises:
|
||||
Create Date: 2015-07-27 16:22:44.918507
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '3ab8b2dfb055'
|
||||
down_revision = None
|
||||
branch_labels = None
|
||||
depends_on = None
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
|
||||
|
||||
def upgrade():
|
||||
op.create_table(
|
||||
"customer",
|
||||
sa.Column('id', sa.Integer, primary_key=True),
|
||||
sa.Column('name', sa.String),
|
||||
sa.Column('order_count', sa.Integer),
|
||||
)
|
||||
|
||||
|
||||
def downgrade():
|
||||
op.drop_table('customer')
|
||||
|
||||
For the second migration, we will create a view and a stored procedure
|
||||
which act upon this table::
|
||||
|
||||
$ alembic revision -m "create views/sp"
|
||||
|
||||
This migration will use the new directives::
|
||||
|
||||
"""create views/sp
|
||||
|
||||
Revision ID: 28af9800143f
|
||||
Revises: 3ab8b2dfb055
|
||||
Create Date: 2015-07-27 16:24:03.589867
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '28af9800143f'
|
||||
down_revision = '3ab8b2dfb055'
|
||||
branch_labels = None
|
||||
depends_on = None
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
|
||||
from foo import ReplaceableObject
|
||||
|
||||
customer_view = ReplaceableObject(
|
||||
"customer_view",
|
||||
"SELECT name, order_count FROM customer WHERE order_count > 0"
|
||||
)
|
||||
|
||||
add_customer_sp = ReplaceableObject(
|
||||
"add_customer_sp(name varchar, order_count integer)",
|
||||
"""
|
||||
RETURNS integer AS $$
|
||||
BEGIN
|
||||
insert into customer (name, order_count)
|
||||
VALUES (in_name, in_order_count);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
"""
|
||||
)
|
||||
|
||||
|
||||
def upgrade():
|
||||
op.create_view(customer_view)
|
||||
op.create_sp(add_customer_sp)
|
||||
|
||||
|
||||
def downgrade():
|
||||
op.drop_view(customer_view)
|
||||
op.drop_sp(add_customer_sp)
|
||||
|
||||
|
||||
We see the use of our new ``create_view()``, ``create_sp()``,
|
||||
``drop_view()``, and ``drop_sp()`` directives. Running these to "head"
|
||||
we get the following (this includes an edited view of SQL emitted)::
|
||||
|
||||
$ alembic upgrade 28af9800143
|
||||
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
|
||||
INFO [alembic.runtime.migration] Will assume transactional DDL.
|
||||
INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
|
||||
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
|
||||
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
|
||||
INFO [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
|
||||
FROM alembic_version
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
|
||||
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
|
||||
INFO [alembic.runtime.migration] Running upgrade -> 3ab8b2dfb055, create table
|
||||
INFO [sqlalchemy.engine.base.Engine]
|
||||
CREATE TABLE customer (
|
||||
id SERIAL NOT NULL,
|
||||
name VARCHAR,
|
||||
order_count INTEGER,
|
||||
PRIMARY KEY (id)
|
||||
)
|
||||
|
||||
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] INSERT INTO alembic_version (version_num) VALUES ('3ab8b2dfb055')
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [alembic.runtime.migration] Running upgrade 3ab8b2dfb055 -> 28af9800143f, create views/sp
|
||||
INFO [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count FROM customer WHERE order_count > 0
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
|
||||
RETURNS integer AS $$
|
||||
BEGIN
|
||||
insert into customer (name, order_count)
|
||||
VALUES (in_name, in_order_count);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='28af9800143f' WHERE alembic_version.version_num = '3ab8b2dfb055'
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] COMMIT
|
||||
|
||||
We see that our CREATE TABLE proceeded as well as the CREATE VIEW and CREATE
|
||||
FUNCTION operations produced by our new directives.
|
||||
|
||||
|
||||
Create Revision Migrations
|
||||
--------------------------
|
||||
|
||||
Finally, we can illustrate how we would "revise" these objects.
|
||||
Let's consider we added a new column ``email`` to our ``customer`` table::
|
||||
|
||||
$ alembic revision -m "add email col"
|
||||
|
||||
The migration is::
|
||||
|
||||
"""add email col
|
||||
|
||||
Revision ID: 191a2d20b025
|
||||
Revises: 28af9800143f
|
||||
Create Date: 2015-07-27 16:25:59.277326
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '191a2d20b025'
|
||||
down_revision = '28af9800143f'
|
||||
branch_labels = None
|
||||
depends_on = None
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
|
||||
|
||||
def upgrade():
|
||||
op.add_column("customer", sa.Column("email", sa.String()))
|
||||
|
||||
|
||||
def downgrade():
|
||||
op.drop_column("customer", "email")
|
||||
|
||||
|
||||
We now need to recreate the ``customer_view`` view and the
|
||||
``add_customer_sp`` function. To include downgrade capability, we will
|
||||
need to refer to the **previous** version of the construct; the
|
||||
``replace_view()`` and ``replace_sp()`` operations we've created make
|
||||
this possible, by allowing us to refer to a specific, previous revision.
|
||||
The ``replaces`` and ``replace_with`` arguments accept a dot-separated
|
||||
string, which refers to a revision number and an object name, such
|
||||
as ``"28af9800143f.customer_view"``. The ``ReversibleOp`` class makes use
|
||||
of the :meth:`.Operations.get_context` method to locate the version file
|
||||
we refer to::
|
||||
|
||||
$ alembic revision -m "update views/sp"
|
||||
|
||||
The migration::
|
||||
|
||||
"""update views/sp
|
||||
|
||||
Revision ID: 199028bf9856
|
||||
Revises: 191a2d20b025
|
||||
Create Date: 2015-07-27 16:26:31.344504
|
||||
|
||||
"""
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = '199028bf9856'
|
||||
down_revision = '191a2d20b025'
|
||||
branch_labels = None
|
||||
depends_on = None
|
||||
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
|
||||
from foo import ReplaceableObject
|
||||
|
||||
customer_view = ReplaceableObject(
|
||||
"customer_view",
|
||||
"SELECT name, order_count, email "
|
||||
"FROM customer WHERE order_count > 0"
|
||||
)
|
||||
|
||||
add_customer_sp = ReplaceableObject(
|
||||
"add_customer_sp(name varchar, order_count integer, email varchar)",
|
||||
"""
|
||||
RETURNS integer AS $$
|
||||
BEGIN
|
||||
insert into customer (name, order_count, email)
|
||||
VALUES (in_name, in_order_count, in_email);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
"""
|
||||
)
|
||||
|
||||
|
||||
def upgrade():
|
||||
op.replace_view(customer_view, replaces="28af9800143f.customer_view")
|
||||
op.replace_sp(add_customer_sp, replaces="28af9800143f.add_customer_sp")
|
||||
|
||||
|
||||
def downgrade():
|
||||
op.replace_view(customer_view, replace_with="28af9800143f.customer_view")
|
||||
op.replace_sp(add_customer_sp, replace_with="28af9800143f.add_customer_sp")
|
||||
|
||||
Above, instead of using ``create_view()``, ``create_sp()``,
|
||||
``drop_view()``, and ``drop_sp()`` methods, we now use ``replace_view()`` and
|
||||
``replace_sp()``. The replace operation we've built always runs a DROP *and*
|
||||
a CREATE. Running an upgrade to head we see::
|
||||
|
||||
$ alembic upgrade head
|
||||
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
|
||||
INFO [alembic.runtime.migration] Will assume transactional DDL.
|
||||
INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
|
||||
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
|
||||
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
|
||||
INFO [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
|
||||
FROM alembic_version
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [alembic.runtime.migration] Running upgrade 28af9800143f -> 191a2d20b025, add email col
|
||||
INFO [sqlalchemy.engine.base.Engine] ALTER TABLE customer ADD COLUMN email VARCHAR
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='191a2d20b025' WHERE alembic_version.version_num = '28af9800143f'
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [alembic.runtime.migration] Running upgrade 191a2d20b025 -> 199028bf9856, update views/sp
|
||||
INFO [sqlalchemy.engine.base.Engine] DROP VIEW customer_view
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count, email FROM customer WHERE order_count > 0
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] DROP FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)
|
||||
RETURNS integer AS $$
|
||||
BEGIN
|
||||
insert into customer (name, order_count, email)
|
||||
VALUES (in_name, in_order_count, in_email);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='199028bf9856' WHERE alembic_version.version_num = '191a2d20b025'
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] COMMIT
|
||||
|
||||
After adding our new ``email`` column, we see that both ``customer_view``
|
||||
and ``add_customer_sp()`` are dropped before the new version is created.
|
||||
If we downgrade back to the old version, we see the old version of these
|
||||
recreated again within the downgrade for this migration::
|
||||
|
||||
$ alembic downgrade 28af9800143
|
||||
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
|
||||
INFO [alembic.runtime.migration] Will assume transactional DDL.
|
||||
INFO [sqlalchemy.engine.base.Engine] BEGIN (implicit)
|
||||
INFO [sqlalchemy.engine.base.Engine] select relname from pg_class c join pg_namespace n on n.oid=c.relnamespace where pg_catalog.pg_table_is_visible(c.oid) and relname=%(name)s
|
||||
INFO [sqlalchemy.engine.base.Engine] {'name': u'alembic_version'}
|
||||
INFO [sqlalchemy.engine.base.Engine] SELECT alembic_version.version_num
|
||||
FROM alembic_version
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [alembic.runtime.migration] Running downgrade 199028bf9856 -> 191a2d20b025, update views/sp
|
||||
INFO [sqlalchemy.engine.base.Engine] DROP VIEW customer_view
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] CREATE VIEW customer_view AS SELECT name, order_count FROM customer WHERE order_count > 0
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] DROP FUNCTION add_customer_sp(in_name varchar, in_order_count integer, in_email varchar)
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] CREATE FUNCTION add_customer_sp(in_name varchar, in_order_count integer)
|
||||
RETURNS integer AS $$
|
||||
BEGIN
|
||||
insert into customer (name, order_count)
|
||||
VALUES (in_name, in_order_count);
|
||||
END;
|
||||
$$ LANGUAGE plpgsql;
|
||||
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='191a2d20b025' WHERE alembic_version.version_num = '199028bf9856'
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [alembic.runtime.migration] Running downgrade 191a2d20b025 -> 28af9800143f, add email col
|
||||
INFO [sqlalchemy.engine.base.Engine] ALTER TABLE customer DROP COLUMN email
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] UPDATE alembic_version SET version_num='28af9800143f' WHERE alembic_version.version_num = '191a2d20b025'
|
||||
INFO [sqlalchemy.engine.base.Engine] {}
|
||||
INFO [sqlalchemy.engine.base.Engine] COMMIT
|
||||
|
||||
Don't Generate Empty Migrations with Autogenerate
|
||||
=================================================
|
||||
|
||||
A common request is to have the ``alembic revision --autogenerate`` command not
|
||||
actually generate a revision file if no changes to the schema are detected. Using
|
||||
the :paramref:`.EnvironmentContext.configure.process_revision_directives`
|
||||
hook, this is straightforward; place a ``process_revision_directives``
|
||||
hook in :meth:`.EnvironmentContext.configure` which removes the
|
||||
single :class:`.MigrationScript` directive if it is empty of
|
||||
any operations::
|
||||
|
||||
|
||||
def run_migrations_online():
|
||||
|
||||
# ...
|
||||
|
||||
def process_revision_directives(context, revision, directives):
|
||||
if config.cmd_opts.autogenerate:
|
||||
script = directives[0]
|
||||
if script.upgrade_ops.is_empty():
|
||||
directives[:] = []
|
||||
|
||||
|
||||
# connectable = ...
|
||||
|
||||
with connectable.connect() as connection:
|
||||
context.configure(
|
||||
connection=connection,
|
||||
target_metadata=target_metadata,
|
||||
process_revision_directives=process_revision_directives
|
||||
)
|
||||
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
Don't emit CREATE TABLE statements for Views
|
||||
============================================
|
||||
|
||||
It is sometimes convenient to create :class:`~sqlalchemy.schema.Table` instances for views
|
||||
so that they can be queried using normal SQLAlchemy techniques. Unfortunately this
|
||||
causes Alembic to treat them as tables in need of creation and to generate spurious
|
||||
``create_table()`` operations. This is easily fixable by flagging such Tables and using the
|
||||
:paramref:`~.EnvironmentContext.configure.include_object` hook to exclude them::
|
||||
|
||||
my_view = Table('my_view', metadata, autoload=True, info=dict(is_view=True)) # Flag this as a view
|
||||
|
||||
Then define ``include_object`` as::
|
||||
|
||||
def include_object(object, name, type_, reflected, compare_to):
|
||||
"""
|
||||
Exclude views from Alembic's consideration.
|
||||
"""
|
||||
|
||||
return not object.info.get('is_view', False)
|
||||
|
||||
Finally, in ``env.py`` pass your ``include_object`` as a keyword argument to :meth:`.EnvironmentContext.configure`.
|
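A sketch of that call, with unrelated arguments elided::

    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        include_object=include_object
    )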
||||
|
||||
.. _multiple_environments:
|
||||
|
||||
Run Multiple Alembic Environments from one .ini file
|
||||
====================================================
|
||||
|
||||
Long before Alembic had the "multiple bases" feature described in :ref:`multiple_bases`,
|
||||
projects had a need to maintain more than one Alembic version history in a single
|
||||
project, where these version histories are completely independent of each other
|
||||
and each refer to their own alembic_version table, either across multiple databases,
|
||||
schemas, or namespaces. A simple approach was added to support this, the
|
||||
``--name`` flag on the commandline.
|
||||
|
||||
First, one would create an alembic.ini file of this form::
|
||||
|
||||
[DEFAULT]
|
||||
# all defaults shared between environments go here
|
||||
|
||||
sqlalchemy.url = postgresql://scott:tiger@hostname/mydatabase
|
||||
|
||||
|
||||
[schema1]
|
||||
# path to env.py and migration scripts for schema1
|
||||
script_location = myproject/revisions/schema1
|
||||
|
||||
[schema2]
|
||||
# path to env.py and migration scripts for schema2
|
||||
script_location = myproject/revisions/schema2
|
||||
|
||||
[schema3]
|
||||
# path to env.py and migration scripts for schema3
|
||||
script_location = myproject/revisions/db2
|
||||
|
||||
# this schema uses a different database URL as well
|
||||
sqlalchemy.url = postgresql://scott:tiger@hostname/myotherdatabase
|
||||
|
||||
|
||||
Above, in the ``[DEFAULT]`` section we set up a default database URL.
|
||||
Then we create three sections corresponding to different revision lineages
|
||||
in our project. Each of these directories would have its own ``env.py``
|
||||
and set of versioning files. Then when we run the ``alembic`` command,
|
||||
we simply give it the name of the configuration we want to use::
|
||||
|
||||
alembic --name schema2 revision -m "new rev for schema 2" --autogenerate
|
||||
|
||||
Above, the ``alembic`` command makes use of the configuration in ``[schema2]``,
|
||||
populated with defaults from the ``[DEFAULT]`` section.
|
||||
|
||||
The above approach can be automated by creating a custom front-end to the
|
||||
Alembic commandline as well.
|
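A minimal sketch of such a front-end, using the ``ini_section`` argument of
:class:`.Config` to select an environment; the hard-coded list of section
names is illustrative::

    from alembic import command
    from alembic.config import Config

    def upgrade_all(ini_path="alembic.ini"):
        # run "upgrade head" against every configured environment
        for name in ("schema1", "schema2", "schema3"):
            cfg = Config(ini_path, ini_section=name)
            command.upgrade(cfg, "head")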
||||
|
|
@@ -1,77 +0,0 @@
|
|||
============
|
||||
Front Matter
|
||||
============
|
||||
|
||||
Information about the Alembic project.
|
||||
|
||||
Project Homepage
|
||||
================
|
||||
|
||||
Alembic is hosted on `Bitbucket <http://bitbucket.org>`_ - the lead project
|
||||
page is at https://bitbucket.org/zzzeek/alembic. Source code is tracked here
|
||||
using `Git <http://git-scm.com/>`_.
|
||||
|
||||
.. versionchanged:: 0.6
|
||||
The source repository was moved from Mercurial to Git.
|
||||
|
||||
Releases and project status are available on Pypi at
|
||||
http://pypi.python.org/pypi/alembic.
|
||||
|
||||
The most recent published version of this documentation should be at
|
||||
http://alembic.zzzcomputing.com/.
|
||||
|
||||
Project Status
|
||||
==============
|
||||
|
||||
Alembic is currently in beta status and is expected to be fairly
|
||||
stable. Users should take care to report bugs and missing features
|
||||
(see :ref:`bugs`) on an as-needed
|
||||
basis. It should be expected that the development version may be required
|
||||
for proper implementation of recently repaired issues in between releases;
|
||||
the latest master is always available at https://bitbucket.org/zzzeek/alembic/get/master.tar.gz.
|
||||
|
||||
.. _installation:
|
||||
|
||||
Installation
|
||||
============
|
||||
|
||||
Install released versions of Alembic from the Python package index with `pip <http://pypi.python.org/pypi/pip>`_ or a similar tool::
|
||||
|
||||
pip install alembic
|
||||
|
||||
Installation via source distribution is via the ``setup.py`` script::
|
||||
|
||||
python setup.py install
|
||||
|
||||
The install will add the ``alembic`` command to the environment. All operations with Alembic
|
||||
then proceed through the usage of this command.
|
||||
|
||||
Dependencies
|
||||
------------
|
||||
|
||||
Alembic's install process will ensure that SQLAlchemy_
|
||||
is installed, in addition to other dependencies. Alembic will work with
|
||||
SQLAlchemy as of version **0.7.3**; however, more features are available with
|
||||
newer versions such as the 0.9 or 1.0 series.
|
||||
|
||||
Alembic supports Python versions 2.6 and above.
|
||||
|
||||
Community
|
||||
=========
|
||||
|
||||
Alembic is developed by `Mike Bayer <http://techspot.zzzeek.org>`_, and is
|
||||
loosely associated with the SQLAlchemy_, `Pylons <http://www.pylonsproject.org>`_,
|
||||
and `Openstack <http://www.openstack.org>`_ projects.
|
||||
|
||||
User issues, discussion of potential bugs and features should be posted
|
||||
to the Alembic Google Group at `sqlalchemy-alembic <https://groups.google.com/group/sqlalchemy-alembic>`_.
|
||||
|
||||
.. _bugs:
|
||||
|
||||
Bugs
|
||||
====
|
||||
Bugs and feature enhancements to Alembic should be reported on the `Bitbucket
|
||||
issue tracker <https://bitbucket.org/zzzeek/alembic/issues?status=new&status=open>`_.
|
||||
|
||||
|
||||
.. _SQLAlchemy: http://www.sqlalchemy.org
|
|
@@ -1,29 +0,0 @@
|
|||
===================================
|
||||
Welcome to Alembic's documentation!
|
||||
===================================
|
||||
|
||||
`Alembic <http://bitbucket.org/zzzeek/alembic>`_ is a lightweight database migration tool for usage
|
||||
with the `SQLAlchemy <http://www.sqlalchemy.org>`_ Database Toolkit for Python.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 3
|
||||
|
||||
front
|
||||
tutorial
|
||||
autogenerate
|
||||
offline
|
||||
naming
|
||||
batch
|
||||
branches
|
||||
ops
|
||||
cookbook
|
||||
api/index
|
||||
changelog
|
||||
|
||||
Indices and tables
|
||||
==================
|
||||
|
||||
* :ref:`genindex`
|
||||
* :ref:`modindex`
|
||||
* :ref:`search`
|
||||
|