What’s new in SQLAlchemy 0.4?

    This document describes changes between SQLAlchemy version 0.3, last released October 14, 2007, and SQLAlchemy version 0.4, last released October 12, 2008.

    Document date: March 21, 2008

    If you’re using any ORM features, make sure you import from sqlalchemy.orm:
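For example:

```python
# wildcard imports: core names from sqlalchemy, ORM names from sqlalchemy.orm
from sqlalchemy import *
from sqlalchemy.orm import *
```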

    Secondly, anywhere you used to say engine=, connectable=, bind_to=, something.engine, metadata.connect(), use bind:

        myengine = create_engine("sqlite://")

        meta = MetaData(myengine)

        meta2 = MetaData()
        meta2.bind = myengine

        session = create_session(bind=myengine)

        statement = select([table], bind=myengine)

    Got those? Good! You’re now (95%) 0.4 compatible. If you’re using 0.3.10, you can make these changes immediately; they’ll work there too.

    Module Imports

    In 0.3, “from sqlalchemy import *” would import all of sqlalchemy’s sub-modules into your namespace. Version 0.4 no longer imports sub-modules into the namespace. This may mean you need to add extra imports into your code.

    In 0.3, this code worked:

        from sqlalchemy import *

        class UTCDateTime(types.TypeDecorator):
            pass

    In 0.4, one must do:

        from sqlalchemy import *
        from sqlalchemy import types

        class UTCDateTime(types.TypeDecorator):
            pass

    New Query API

    Query is standardized on the generative interface (old interface is still there, just deprecated). While most of the generative interface is available in 0.3, the 0.4 Query has the inner guts to match the generative outside, and has a lot more tricks. All result narrowing is via filter() and filter_by(), limiting/offset is either through array slices or limit()/offset(), joining is via join() and outerjoin() (or more manually, through select_from() as well as manually-formed criteria).

    To avoid deprecation warnings, you must make some changes to your 0.3 code:

    User.query.get_by(**kwargs)

        User.query.filter_by(**kwargs).first()

    User.query.select_by(**kwargs)

        User.query.filter_by(**kwargs).all()

    User.query.select()

        User.query.filter(xxx).all()

    New Property-Based Expression Constructs

    By far the most palpable difference within the ORM is that you can now construct your query criterion using class-based attributes directly. The “.c.” prefix is no longer needed when working with mapped classes:

        session.query(User).filter(and_(User.name == "fred", User.id > 17))

    While simple column-based comparisons are no big deal, the class attributes have some new “higher level” constructs available, including what was previously only available in filter_by():

        # comparison of scalar relations to an instance
        filter(Address.user == user)

        # return all users who contain a particular address
        filter(User.addresses.contains(address))

        # return all users who *don't* contain the address
        filter(~User.addresses.contains(address))

        # return all users who contain a particular address with
        # the email_address like '%foo%'
        filter(User.addresses.any(Address.email_address.like("%foo%")))

        # same, email address equals 'foo@bar.com'. can fall back to keyword
        # args for simple comparisons
        filter(User.addresses.any(email_address="foo@bar.com"))

        # return all Addresses whose user attribute has the username 'ed'
        filter(Address.user.has(name="ed"))

        # return all Addresses whose user attribute has the username 'ed'
        # and an id > 5 (mixing clauses with kwargs)
        filter(Address.user.has(User.id > 5, name="ed"))

    The Column collection remains available on mapped classes in the .c attribute. Note that property-based expressions are only available with mapped properties of mapped classes. .c is still used to access columns in regular tables and selectable objects produced from SQL Expressions.

    Automatic Join Aliasing

    We’ve had join() and outerjoin() for a while now:

        session.query(Order).join("items")

    Now you can alias them:

        session.query(Order).join("items", aliased=True).filter(
            Item.name == "item 1"
        ).join("items", aliased=True).filter(Item.name == "item 3")

    The above will create two joins from orders->items using aliases. The filter() call subsequent to each will adjust its table criterion to that of the alias. To get at the Item objects, use add_entity() and target each join with an id:

        session.query(Order).join("items", id="j1", aliased=True).filter(
            Item.name == "item 1"
        ).join("items", aliased=True, id="j2").filter(
            Item.name == "item 3"
        ).add_entity(Item, id="j1").add_entity(Item, id="j2")

    Returns tuples in the form: (Order, Item, Item).

    Self-referential Queries

    So query.join() can make aliases now. What does that give us? Self-referential queries! Joins can be done without any Alias objects:

        # standard self-referential TreeNode mapper with backref
        mapper(
            TreeNode,
            tree_nodes,
            properties={
                "children": relation(
                    TreeNode, backref=backref("parent", remote_side=tree_nodes.c.id)
                )
            },
        )

        # query for node with child containing "bar" two levels deep
        session.query(TreeNode).join(["children", "children"], aliased=True).filter_by(
            name="bar"
        )

    To add criterion for each table along the way in an aliased join, you can use from_joinpoint to keep joining against the same line of aliases:

        # search for the treenode along the path "n1/n12/n122"

        # first find a Node with name="n122"
        q = sess.query(Node).filter_by(name="n122")

        # then join to parent with "n12"
        q = q.join("parent", aliased=True).filter_by(name="n12")

        # join again to the next parent with 'n1'. use 'from_joinpoint'
        # so we join from the previous point, instead of joining off the
        # root table
        q = q.join("parent", aliased=True, from_joinpoint=True).filter_by(name="n1")

        node = q.first()

    query.populate_existing()

    The eager version of query.load() (or session.refresh()). Every instance loaded from the query, including all eagerly loaded items, is refreshed immediately if already present in the session.
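    For example (a sketch; Blah stands in for any mapped class):

```python
# re-load each matched instance from the database, even if it's
# already present in the session's identity map
session.query(Blah).populate_existing().all()
```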

    Relations

    SQL Clauses Embedded in Updates/Inserts

    For inline execution of SQL clauses, embedded right in the UPDATE or INSERT, during a flush():

        myobject.foo = mytable.c.value + 1

        user.pwhash = func.md5(password)

        order.hash = text("select hash from hashing_table")

    Self-referential and Cyclical Eager Loading

    Since our alias-fu has improved, relation() can join along the same table *any number of times*; you tell it how deep you want to go. Let’s show the self-referential TreeNode more clearly:

        nodes = Table(
            "nodes",
            metadata,
            Column("id", Integer, primary_key=True),
            Column("parent_id", Integer, ForeignKey("nodes.id")),
            Column("name", String(30)),
        )

        class TreeNode(object):
            pass

        mapper(
            TreeNode,
            nodes,
            properties={"children": relation(TreeNode, lazy=False, join_depth=3)},
        )

    So what happens when we say:

        create_session().query(TreeNode).all()

    ? A join along aliases, three levels deep off the parent:

        SELECT
        nodes_3.id AS nodes_3_id, nodes_3.parent_id AS nodes_3_parent_id, nodes_3.name AS nodes_3_name,
        nodes_2.id AS nodes_2_id, nodes_2.parent_id AS nodes_2_parent_id, nodes_2.name AS nodes_2_name,
        nodes_1.id AS nodes_1_id, nodes_1.parent_id AS nodes_1_parent_id, nodes_1.name AS nodes_1_name,
        nodes.id AS nodes_id, nodes.parent_id AS nodes_parent_id, nodes.name AS nodes_name
        FROM nodes LEFT OUTER JOIN nodes AS nodes_1 ON nodes.id = nodes_1.parent_id
        LEFT OUTER JOIN nodes AS nodes_2 ON nodes_1.id = nodes_2.parent_id
        LEFT OUTER JOIN nodes AS nodes_3 ON nodes_2.id = nodes_3.parent_id
        ORDER BY nodes.oid, nodes_1.oid, nodes_2.oid, nodes_3.oid

    Notice the nice clean alias names too. The joining doesn’t care if it’s against the same immediate table or some other object which then cycles back to the beginning. Any kind of chain of eager loads can cycle back onto itself when join_depth is specified. When not present, eager loading automatically stops when it hits a cycle.

    Composite Types

    This is one from the Hibernate camp. Composite Types let you define a custom datatype that is composed of more than one column (or one column, if you wanted). Let’s define a new type, Point, which stores an x/y coordinate:

        class Point(object):
            def __init__(self, x, y):
                self.x = x
                self.y = y

            def __composite_values__(self):
                return self.x, self.y

            def __eq__(self, other):
                return other.x == self.x and other.y == self.y

            def __ne__(self, other):
                return not self.__eq__(other)

    The way the Point object is defined is specific to a composite type: its constructor takes the column values as positional arguments, and its __composite_values__() method produces a sequence of those values. The order will match up with our mapper, as we’ll see in a moment.

    Let’s create a table of vertices storing two points per row:

        vertices = Table(
            "vertices",
            metadata,
            Column("id", Integer, primary_key=True),
            Column("x1", Integer),
            Column("y1", Integer),
            Column("x2", Integer),
            Column("y2", Integer),
        )

    Then, map it! We’ll create a Vertex object which stores two Point objects:

        class Vertex(object):
            def __init__(self, start, end):
                self.start = start
                self.end = end

        mapper(
            Vertex,
            vertices,
            properties={
                "start": composite(Point, vertices.c.x1, vertices.c.y1),
                "end": composite(Point, vertices.c.x2, vertices.c.y2),
            },
        )

    Once you’ve set up your composite type, it’s usable just like any other type:

        v = Vertex(Point(3, 4), Point(26, 15))
        session.save(v)
        session.flush()

        # works in queries too
        q = session.query(Vertex).filter(Vertex.start == Point(3, 4))

    If you’d like to define the way the mapped attributes generate SQL clauses when used in expressions, create your own sqlalchemy.orm.PropComparator subclass, defining any of the common operators (like __eq__(), __le__(), etc.), and send it in to composite(). Composite types work as primary keys too, and are usable in query.get():

        # a Document class which uses a composite Version
        # object as primary key
        document = query.get(Version(1, "a"))

    dynamic_loader() relations

    A relation() that returns a live Query object for all read operations. Write operations are limited to just append() and remove(); changes to the collection are not visible until the session is flushed. This feature is particularly handy with an “autoflushing” session, which will flush before each query.

        mapper(
            Foo,
            foo_table,
            properties={
                "bars": dynamic_loader(
                    Bar,
                    backref="foo",
                    # <other relation() opts>
                )
            },
        )

        session = create_session(autoflush=True)
        foo = session.query(Foo).first()
        for bar in foo.bars.filter(Bar.name == "lala"):
            print(bar)
        session.commit()

    New Options: undefer_group(), eagerload_all()

    A couple of query options which are handy. undefer_group() marks a whole group of “deferred” columns as undeferred:

        mapper(
            Class,
            table,
            properties={
                "foo": deferred(table.c.foo, group="group1"),
                "bar": deferred(table.c.bar, group="group1"),
                "bat": deferred(table.c.bat, group="group1"),
            },
        )

        session.query(Class).options(undefer_group("group1")).filter(...).all()

    and eagerload_all() sets a chain of attributes to be eager in one pass:

        mapper(Foo, foo_table, properties={"bar": relation(Bar)})
        mapper(Bar, bar_table, properties={"bat": relation(Bat)})
        mapper(Bat, bat_table)

        # eager load bar and bat
        session.query(Foo).options(eagerload_all("bar.bat")).filter(...).all()

    New Collection API

    Collections are no longer proxied by an InstrumentedList proxy, and access to members, methods and attributes is direct. Decorators now intercept objects entering and leaving the collection, and it is now possible to easily write a custom collection class that manages its own membership. Flexible decorators also replace the named method interface of custom collections in 0.3, allowing any class to be easily adapted to use as a collection container.

    Dictionary-based collections are now much easier to use and fully dict-like. Changing __iter__ is no longer needed for dicts, and new built-in dict types cover many needs:

        # use a dictionary relation keyed by a column
        relation(Item, collection_class=column_mapped_collection(items.c.keyword))

        # or named attribute
        relation(Item, collection_class=attribute_mapped_collection("keyword"))

        # or any function you like
        relation(Item, collection_class=mapped_collection(lambda entity: entity.a + entity.b))

    Existing 0.3 dict-like and freeform object derived collection classes will need to be updated for the new API. In most cases this is simply a matter of adding a couple decorators to the class definition.
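    For example, a plain class can be adapted as a collection with the decorators in sqlalchemy.orm.collections (the SetLike class here is hypothetical):

```python
from sqlalchemy.orm.collections import collection

class SetLike(object):
    """A freeform class adapted for use as a relation() collection."""

    def __init__(self):
        self._members = set()

    @collection.appender
    def add_member(self, item):
        # called when an item enters the collection
        self._members.add(item)

    @collection.remover
    def remove_member(self, item):
        # called when an item leaves the collection
        self._members.remove(item)

    @collection.iterator
    def __iter__(self):
        return iter(self._members)
```

    Used as relation(Item, collection_class=SetLike).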

    Mapped Relations from External Tables/Subqueries

    This feature quietly appeared in 0.3 but has been improved in 0.4 thanks to better ability to convert subqueries against a table into subqueries against an alias of that table; this is key for eager loading, aliased joins in queries, etc. It reduces the need to create mappers against select statements when you just need to add some extra columns or subqueries:
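    For instance, extra columns can be attached to a mapper with column_property(); a sketch, assuming users and posts tables:

```python
# add a concatenated name and a correlated subquery count to the mapping
mapper(
    User,
    users,
    properties={
        "fullname": column_property(
            (users.c.firstname + users.c.lastname).label("fullname")
        ),
        "count": column_property(
            select(
                [func.count(posts.c.id)], users.c.id == posts.c.user_id
            ).correlate(users).label("count")
        ),
    },
)
```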

    A typical query looks like:

        SELECT (SELECT count(1) FROM posts WHERE users.id = posts.user_id) AS count,
        users.firstname || users.lastname AS fullname,
        users.id AS users_id, users.firstname AS users_firstname, users.lastname AS users_lastname
        FROM users ORDER BY users.oid

    Horizontal Scaling (Sharding) API

    See the example in examples/sharding/attribute_shard.py.

    Sessions

    New Session Create Paradigm; SessionContext, assignmapper Deprecated

    That’s right, the whole shebang is being replaced with two configurational functions. Using both will produce the most 0.1-ish feel we’ve had since 0.1 (i.e., the least amount of typing).

    Configure your own Session class right where you define your engine (or anywhere):

        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        engine = create_engine("myengine://")
        Session = sessionmaker(bind=engine, autoflush=True, transactional=True)

        # use the new Session() freely
        sess = Session()
        sess.save(someobject)
        sess.flush()

    If you need to post-configure your Session, say with an engine, add it later with configure():

        Session.configure(bind=create_engine(...))

    All the behaviors of SessionContext and the query and __init__ methods of assignmapper are moved into the new scoped_session() function, which is compatible with both sessionmaker as well as create_session():

        from sqlalchemy.orm import scoped_session, sessionmaker

        Session = scoped_session(sessionmaker(autoflush=True, transactional=True))
        Session.configure(bind=engine)

        u = User(name="wendy")

        sess = Session()
        sess.save(u)
        sess.commit()

        # Session constructor is thread-locally scoped. Everyone gets the same
        # Session in the thread when scope="thread".
        sess2 = Session()
        assert sess is sess2

    When using a thread-local Session, the returned class has all of Session’s interface implemented as classmethods, and assignmapper’s functionality is available via the mapper classmethod. Just like the old objectstore days…

        # "assignmapper"-like functionality available via ScopedSession.mapper
        Session.mapper(User, users_table)

        u = User(name="wendy")

        Session.commit()

    Sessions are again Weak Referencing By Default

    Auto-Transactional Sessions

    As you might have noticed above, we are calling commit() on Session. The flag transactional=True means the Session is always in a transaction; commit() persists the changes permanently.

    Auto-Flushing Sessions

    Also, autoflush=True means the Session will flush() before each query as well as when you call flush() or commit(). So now this will work:

        Session = sessionmaker(bind=engine, autoflush=True, transactional=True)

        u = User(name="wendy")

        sess = Session()
        sess.save(u)

        # wendy is flushed, comes right back from a query
        wendy = sess.query(User).filter_by(name="wendy").one()

    Transactional methods moved onto sessions

    commit(), rollback(), and begin() are now directly on Session. There is no more need to use SessionTransaction for anything (it remains in the background).

        Session = sessionmaker(autoflush=True, transactional=False)

        sess = Session()
        sess.begin()

        # ... use the session ...

        sess.commit()  # commit transaction

    Sharing a Session with an enclosing engine-level (i.e. non-ORM) transaction is easy:

        Session = sessionmaker(autoflush=True, transactional=False)

        conn = engine.connect()
        trans = conn.begin()
        sess = Session(bind=conn)

        # ... session is transactional ...

        # commit the outermost transaction
        trans.commit()

    Nested Session Transactions with SAVEPOINT

    Available at the Engine and ORM level. ORM docs so far:
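    A sketch of the ORM-level usage, assuming the 0.4 Session.begin_nested() method:

```python
sess = Session()
sess.begin()          # outer transaction
sess.save(u1)

sess.begin_nested()   # emits a SAVEPOINT
sess.save(u2)
sess.rollback()       # rolls back to the savepoint; u1 is still pending

sess.commit()         # commits u1 in the outer transaction
```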

    Two-Phase Commit Sessions

    Available at the Engine and ORM level. ORM docs so far:
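    A sketch, assuming the twophase flag on sessionmaker() and a binds dictionary mapping classes to engines:

```python
# two-phase commit across two databases (supported backends only)
Session = sessionmaker(twophase=True, transactional=True)
Session.configure(binds={User: engine1, Account: engine2})

sess = Session()
# ... modify User and Account instances ...
sess.commit()  # PREPARE on each participating connection, then COMMIT
```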

    Inheritance

    Polymorphic Inheritance with No Joins or Unions

    New docs for inheritance: https://www.sqlalchemy.org/docs/04/mappers.html#advdatamapping_mapper_inheritance_joined

    Better Polymorphic Behavior with get()

    All classes within a joined-table inheritance hierarchy get an _instance_key using the base class, i.e. (BaseClass, (1, ), None). That way when you call get() on a Query against the base class, it can locate subclass instances in the current identity map without querying the database.

    Types

    Custom Subclasses of sqlalchemy.types.TypeDecorator

    There is a new API for subclassing a TypeDecorator. Using the 0.3 API causes compilation errors in some cases.
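    A sketch of the new style (to_utc()/from_utc() are hypothetical helpers; the key point is the process_bind_param()/process_result_value() pair, which replace the 0.3 convert_* methods):

```python
from sqlalchemy import types

class UTCDateTime(types.TypeDecorator):
    impl = types.DateTime

    def process_bind_param(self, value, dialect):
        # convert an incoming datetime to UTC before it hits the database
        # (to_utc is a hypothetical helper)
        return to_utc(value) if value is not None else None

    def process_result_value(self, value, dialect):
        # tag outgoing values with the UTC timezone
        # (from_utc is a hypothetical helper)
        return from_utc(value) if value is not None else None
```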

    SQL Expressions

    All the “anonymous” labels and aliases use a simple <name>_<number> format now. SQL is much easier to read and is compatible with plan optimizer caches. Just check out some of the examples in the tutorials: https://www.sqlalchemy.org/docs/04/sqlexpression.html

    Generative select() Constructs

    This is definitely the way to go with select(). See https://www.sqlalchemy.org/docs/04/sqlexpression.html#sql_transform.

    New Operator System

    SQL operators and more or less every SQL keyword there is are now abstracted into the compiler layer. They now act intelligently and are type/backend aware, see:

    All type Keyword Arguments Renamed to type_

    Just like it says:

        b = bindparam("foo", type_=String)

    in_ Function Changed to Accept Sequence or Selectable

    The in_ function now takes a sequence of values or a selectable as its sole argument. The previous API of passing in values as positional arguments still works, but is now deprecated. This means that

        my_table.select(my_table.c.id.in_(1, 2, 3))
        my_table.select(my_table.c.id.in_(*listOfIds))

    should be changed to

        my_table.select(my_table.c.id.in_([1, 2, 3]))
        my_table.select(my_table.c.id.in_(listOfIds))

    MetaData, BoundMetaData, DynamicMetaData

    In the 0.3.x series, BoundMetaData and DynamicMetaData were deprecated in favor of MetaData and ThreadLocalMetaData. The older names have been removed in 0.4. Updating is simple:

        +-------------------------------------+-------------------------+
        | If You Had                          | Now Use                 |
        +=====================================+=========================+
        | ``MetaData``                        | ``MetaData``            |
        +-------------------------------------+-------------------------+
        | ``BoundMetaData``                   | ``MetaData``            |
        +-------------------------------------+-------------------------+
        | ``DynamicMetaData`` (with one       | ``MetaData``            |
        | engine or threadlocal=False)        |                         |
        +-------------------------------------+-------------------------+
        | ``DynamicMetaData``                 | ``ThreadLocalMetaData`` |
        | (with different engines per thread) |                         |
        +-------------------------------------+-------------------------+

    The seldom-used name parameter to MetaData types has been removed. The ThreadLocalMetaData constructor now takes no arguments. Both types can now be bound to an Engine or a single Connection.

    You can now load table definitions and automatically create Table objects from an entire database or schema in one pass:

        >>> metadata = MetaData(myengine, reflect=True)
        >>> metadata.tables.keys()
        ['table_a', 'table_b', 'table_c', '...']

    MetaData also gains a .reflect() method enabling finer control over the loading process, including specification of a subset of available tables to load.
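    A sketch, assuming the only parameter to reflect() selects the subset:

```python
metadata = MetaData(myengine)

# load just the named tables, not the whole schema
metadata.reflect(only=["table_a", "table_b"])
```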

    SQL Execution

    engine, connectable, and bind_to are all now bind

    Transactions, NestedTransactions and TwoPhaseTransactions

    Connection Pool Events

    The connection pool now fires events when new DB-API connections are created, checked out and checked back into the pool. You can use these to execute session-scoped SQL setup statements on fresh connections, for example.
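    A sketch, assuming the 0.4 PoolListener interface in sqlalchemy.interfaces (the SET search_path statement is just one example of session-scoped setup SQL):

```python
from sqlalchemy import create_engine
from sqlalchemy.interfaces import PoolListener

class SetupListener(PoolListener):
    def connect(self, dbapi_con, con_record):
        # runs once for each newly created DB-API connection
        cur = dbapi_con.cursor()
        cur.execute("SET search_path TO myapp, public")
        cur.close()

engine = create_engine("postgres://user:pw@host/db", listeners=[SetupListener()])
```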

    Oracle Engine Fixed

    In 0.3.11, there were bugs in the Oracle engine’s handling of primary keys. These bugs could cause programs that worked fine with other engines, such as SQLite, to fail when using Oracle. In 0.4, the Oracle engine has been reworked, fixing these primary key problems.

    Out Parameters for Oracle

    MetaData and Session can be explicitly bound to a connection:

        conn = engine.connect()
        sess = create_session(bind=conn)

    Faster, More Foolproof Objects