
    ORM API Features for Querying

    Loader options are objects which, when passed to the Select.options() method of a Select object or similar SQL construct, affect the loading of both column and relationship-oriented attributes. The majority of loader options descend from the Load hierarchy. For a complete overview of using loader options, see the linked sections below.

    See also

    • Column Loading Options - details mapper and loader options that affect how column and SQL-expression mapped attributes are loaded

    • Relationship Loading Techniques - details relationship and loader options that affect how relationship-mapped attributes are loaded

    ORM Execution Options

    ORM-level execution options are keyword options that may be associated with a statement execution using either the Session.execute.execution_options parameter, which is a dictionary argument accepted by Session methods such as Session.execute() and Session.scalars(), or by associating them directly with the statement to be invoked itself using the Executable.execution_options() method, which accepts them as arbitrary keyword arguments.

    ORM-level options are distinct from the Core-level execution options documented at Connection.execution_options(). It’s important to note that the ORM options discussed below are not compatible with Core-level methods Connection.execution_options() or Engine.execution_options(); the options are ignored at this level, even if the Engine or Connection is associated with the Session in use.

    Within this section, the Executable.execution_options() method style will be illustrated for examples.

    Populate Existing

    The populate_existing execution option ensures that, for all rows loaded, the corresponding instances in the Session will be fully refreshed – erasing any existing data within the objects (including pending changes) and replacing it with the data loaded from the result.

    Example use looks like:

    Normally, ORM objects are only loaded once; if they are matched up to the primary key in a subsequent result row, the row is not applied to the object. This both preserves pending, unflushed changes on the object and avoids the overhead and complexity of refreshing data which is already there. The Session assumes a default working model of a highly isolated transaction, and to the degree that data is expected to change within the transaction outside of the local changes being made, those use cases would be handled using explicit steps such as this option.

    Using populate_existing, any set of objects that matches a query can be refreshed, and it also allows control over relationship loader options. E.g. to refresh an instance while also refreshing a related set of objects:

    stmt = (
        select(User)
        .where(User.name.in_(names))
        .execution_options(populate_existing=True)
        .options(selectinload(User.addresses))
    )
    # will refresh all matching User objects as well as the related
    # Address objects
    users = session.execute(stmt).scalars().all()

    Another use case for populate_existing is in support of various attribute loading features that can change how an attribute is loaded on a per-query basis. Options to which this applies include the with_expression(), PropComparator.and_(), contains_eager(), and with_loader_criteria() options.

    The populate_existing execution option is equivalent to the Query.populate_existing() method in 1.x style ORM queries.

    See also

    I’m re-loading data with my Session but it isn’t seeing changes that I committed elsewhere - in Frequently Asked Questions

    Refreshing / Expiring - in the ORM documentation

    Autoflush

    This option, when passed as False, will cause the Session to not invoke the “autoflush” step. It is equivalent to using the Session.no_autoflush context manager to disable autoflush:

    >>> stmt = select(User).execution_options(autoflush=False)
    >>> session.execute(stmt)
    SELECT user_account.id, user_account.name, user_account.fullname
    FROM user_account
    ...

    This option will also work on ORM-enabled Update and Delete statements.

    The autoflush execution option is equivalent to the Query.autoflush() method in 1.x style ORM queries.


    Fetching Large Result Sets with Yield Per

    The yield_per execution option is an integer value which will cause the Session to buffer only a limited number of rows and/or ORM objects at a time, before making data available to the client.

    Normally, the ORM will fetch all rows immediately, constructing ORM objects for each and assembling those objects into a single buffer, before passing this buffer to the Result object as a source of rows to be returned. The rationale for this behavior is to allow correct behavior for features such as joined eager loading, uniquifying of results, and the general case of result handling logic that relies upon the identity map maintaining a consistent state for every object in a result set as it is fetched.

    The purpose of the yield_per option is to change this behavior so that the ORM result set is optimized for iteration through very large result sets (e.g. > 10K rows), where the user has determined that the above patterns don’t apply. When yield_per is used, the ORM will instead batch ORM results into sub-collections and yield rows from each sub-collection individually as the object is iterated, so that the Python interpreter doesn’t need to declare very large areas of memory which is both time consuming and leads to excessive memory use. The option affects both the way the database cursor is used as well as how the ORM constructs rows and objects to be passed to the Result.

    Tip

    From the above, it follows that the Result must be consumed in an iterable fashion, that is, using iteration such as for row in result or using partial row methods such as Result.fetchmany() or Result.partitions(). Calling Result.all() will defeat the purpose of using yield_per.

    Using yield_per is equivalent to making use of both the Connection.execution_options.stream_results execution option, which selects for server side cursors to be used by the backend if supported, and the Result.yield_per() method on the returned Result object, which establishes a fixed size of rows to be fetched as well as a corresponding limit to how many ORM objects will be constructed at once.

    Tip

    yield_per is now available as a Core execution option as well, described in detail at Using Server Side Cursors (a.k.a. stream results). This section details the use of yield_per as an execution option with an ORM Session. The option behaves as similarly as possible in both contexts.

    When used with the ORM, yield_per must be established either via the Executable.execution_options() method on the given statement or by passing it to the Session.execute.execution_options parameter of Session.execute() or another similar method such as Session.scalars(). Typical use for fetching ORM objects is illustrated below:

    >>> stmt = select(User).execution_options(yield_per=10)
    >>> for user_obj in session.scalars(stmt):
    ...     print(user_obj)
    SELECT user_account.id, user_account.name, user_account.fullname
    FROM user_account
    [...] ()
    User(id=1, name='spongebob', fullname='Spongebob Squarepants')
    User(id=2, name='sandy', fullname='Sandy Cheeks')
    ...
    >>> # ... rows continue ...

    The above code is equivalent to the example below, which uses the Connection.execution_options.stream_results and Connection.execution_options.max_row_buffer Core-level execution options in conjunction with the Result.yield_per() method of Result:

    # equivalent code
    >>> stmt = select(User).execution_options(stream_results=True, max_row_buffer=10)
    >>> for user_obj in session.scalars(stmt).yield_per(10):
    ...     print(user_obj)
    SELECT user_account.id, user_account.name, user_account.fullname
    FROM user_account
    [...] ()
    User(id=1, name='spongebob', fullname='Spongebob Squarepants')
    User(id=2, name='sandy', fullname='Sandy Cheeks')
    ...
    >>> # ... rows continue ...

    yield_per is also commonly used in combination with the Result.partitions() method, which will iterate rows in grouped partitions. The size of each partition defaults to the integer value passed to yield_per, as in the below example:

    >>> stmt = select(User).execution_options(yield_per=10)
    >>> for partition in session.scalars(stmt).partitions():
    ...     for user_obj in partition:
    ...         print(user_obj)
    SELECT user_account.id, user_account.name, user_account.fullname
    FROM user_account
    [...] ()
    User(id=1, name='spongebob', fullname='Spongebob Squarepants')
    User(id=2, name='sandy', fullname='Sandy Cheeks')
    ...
    >>> # ... rows continue ...

    The yield_per execution option is not compatible with “subquery” eager loading or “joined” eager loading when using collections. It is potentially compatible with “select in” eager loading, provided the database driver supports multiple, independent cursors.

    Additionally, the yield_per execution option is not compatible with the Result.unique() method; as this method relies upon storing a complete set of identities for all rows, it would necessarily defeat the purpose of using yield_per, which is to handle an arbitrarily large number of rows.

    Changed in version 1.4.6: An exception is raised when ORM rows are fetched from a Result object that makes use of the Result.unique() filter, at the same time as the yield_per execution option is used.

    When using the legacy Query object with ORM use, the Query.yield_per() method will have the same result as that of the yield_per execution option.

    See also

    Using Server Side Cursors (a.k.a. stream results)

    Identity Token

    Deep Alchemy

    This option is an advanced-use feature mostly intended to be used with the Horizontal Sharding extension. For typical cases of loading objects with identical primary keys from different “shards” or partitions, consider using individual Session objects per shard first.

    The “identity token” is an arbitrary value that can be associated within the identity key of newly loaded objects. This element exists first and foremost to support extensions which perform per-row “sharding”, where objects may be loaded from any number of replicas of a particular database table that nonetheless have overlapping primary key values. The primary consumer of “identity token” is the Horizontal Sharding extension, which supplies a general framework for persisting objects among multiple “shards” of a particular database table.

    The identity_token execution option may be used on a per-query basis to directly affect this token. Using it directly, one can populate a Session with multiple instances of an object that have the same primary key and source table, but different “identities”.

    One such example is to populate a Session with objects that come from same-named tables in different schemas, using the Connection.execution_options.schema_translate_map feature which can affect the choice of schema within the scope of queries. Given a mapping as:

    from sqlalchemy.orm import DeclarativeBase
    from sqlalchemy.orm import Mapped
    from sqlalchemy.orm import mapped_column


    class Base(DeclarativeBase):
        pass


    class MyTable(Base):
        __tablename__ = "my_table"

        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]
    The default “schema” name for the class above is None, meaning, no schema qualification will be written into SQL statements. However, if we make use of Connection.execution_options.schema_translate_map, mapping None to an alternate schema, we can place instances of MyTable into two different schemas:

    engine = create_engine(
        "postgresql+psycopg://scott:tiger@localhost/test",
    )

    with Session(
        engine.execution_options(schema_translate_map={None: "test_schema"})
    ) as sess:
        sess.add(MyTable(name="this is schema one"))
        sess.commit()

    with Session(
        engine.execution_options(schema_translate_map={None: "test_schema_2"})
    ) as sess:
        sess.add(MyTable(name="this is schema two"))
        sess.commit()
    The above two blocks each create a Session object linked to a different schema translate map, and an instance of MyTable is persisted into both test_schema.my_table as well as test_schema_2.my_table.

    The Session objects above are independent. If we wanted to persist both objects in one transaction, we would need to use the Horizontal Sharding extension to do this.

    However, we can illustrate querying for these objects in one session as follows:

    with Session(engine) as sess:
        obj1 = sess.scalar(
            select(MyTable)
            .where(MyTable.id == 1)
            .execution_options(
                schema_translate_map={None: "test_schema"},
                identity_token="test_schema",
            )
        )
        obj2 = sess.scalar(
            select(MyTable)
            .where(MyTable.id == 1)
            .execution_options(
                schema_translate_map={None: "test_schema_2"},
                identity_token="test_schema_2",
            )
        )

    obj1 and obj2 are distinct from each other, even though they both refer to primary key id 1 of the MyTable class. This is where the identity_token comes into play, which we can see in the inspection of each object, where we look at InstanceState.key to view the two distinct identity tokens:

    >>> from sqlalchemy import inspect
    >>> inspect(obj1).key
    (<class '__main__.MyTable'>, (1,), 'test_schema')
    >>> inspect(obj2).key
    (<class '__main__.MyTable'>, (1,), 'test_schema_2')

    The above logic takes place automatically when using the Horizontal Sharding extension.

    New in version 2.0.0rc1: Added the identity_token ORM level execution option.

    See also

    Horizontal Sharding - in the ORM Examples section. See the script separate_schema_translates.py for a demonstration of the above use case using the full sharding API.

    Inspecting entities and columns from ORM-enabled SELECT and DML statements

    The select() construct, as well as the insert(), update() and delete() constructs (for the latter DML constructs, as of SQLAlchemy 1.4.33), all support the ability to inspect the entities against which these statements are created, as well as the columns and datatypes that would be returned in a result set.

    For a Select object, this information is available from the Select.column_descriptions attribute. This attribute operates in the same way as the legacy Query.column_descriptions attribute. The format returned is a list of dictionaries:

    >>> from pprint import pprint
    >>> user_alias = aliased(User, name="user2")
    >>> stmt = select(User, User.id, user_alias)
    >>> pprint(stmt.column_descriptions)
    [{'aliased': False,
      'entity': <class 'User'>,
      'expr': <class 'User'>,
      'name': 'User',
      'type': <class 'User'>},
     {'aliased': False,
      'entity': <class 'User'>,
      'expr': <....InstrumentedAttribute object at ...>,
      'name': 'id',
      'type': Integer()},
     {'aliased': True,
      'entity': <AliasedClass ...; User>,
      'expr': <AliasedClass ...; User>,
      'name': 'user2',
      'type': <class 'User'>}]

    When Select.column_descriptions is used with non-ORM objects such as plain Table or Column objects, the entries will contain basic information about individual columns returned in all cases:

    Changed in version 1.4.33: The Select.column_descriptions attribute now returns a value when used against a Select that is not ORM-enabled. Previously, this would raise NotImplementedError.

    For insert(), update() and delete() constructs, there are two separate attributes. One is UpdateBase.entity_description, which returns information about the primary ORM entity and database table which the DML construct would affect:

    >>> from sqlalchemy import update
    >>> stmt = update(User).values(name="somename").returning(User.id)
    >>> pprint(stmt.entity_description)
    {'entity': <class 'User'>,
     'expr': <class 'User'>,
     'name': 'User',
     'table': Table('user_account', ...),
     'type': <class 'User'>}

    Tip

    The UpdateBase.entity_description includes an entry "table" which is actually the table to be inserted, updated or deleted by the statement, which is not always the same as the SQL “selectable” to which the class may be mapped. For example, in a joined-table inheritance scenario, "table" will refer to the local table for the given entity.

    The other is UpdateBase.returning_column_descriptions, which delivers information about the columns present in the RETURNING collection in a manner roughly similar to that of Select.column_descriptions:

    >>> pprint(stmt.returning_column_descriptions)
    [{'aliased': False,
      'entity': <class 'User'>,
      'expr': <sqlalchemy.orm.attributes.InstrumentedAttribute ...>,
      'name': 'id',
      'type': Integer()}]

    New in version 1.4.33: Added the UpdateBase.entity_description and UpdateBase.returning_column_descriptions attributes.

    Additional ORM API Constructs

    function sqlalchemy.orm.aliased(element: Union[_EntityType[_O], FromClause], alias: Optional[Union[Alias, Subquery]] = None, name: Optional[str] = None, flat: bool = False, adapt_on_names: bool = False) → Union[AliasedClass[_O], FromClause, AliasedType[_O]]

    Produce an alias of the given element, usually an AliasedClass instance.

    E.g.:

    my_alias = aliased(MyClass)

    stmt = select(MyClass, my_alias).filter(MyClass.id > my_alias.id)
    result = session.execute(stmt)

    The aliased() function is used to create an ad-hoc mapping of a mapped class to a new selectable. By default, a selectable is generated from the normally mapped selectable (typically a Table) using the FromClause.alias() method. However, aliased() can also be used to link the class to a new select() statement. Also, the with_polymorphic() function is a variant of aliased() that is intended to specify a so-called “polymorphic selectable”, which corresponds to the union of several joined-inheritance subclasses at once.

    For convenience, the aliased() function also accepts plain FromClause constructs, such as a Table or select() construct. In those cases, the FromClause.alias() method is called on the object and the new Alias object is returned. The returned Alias is not ORM-mapped in this case.

    See also

    ORM Entity Aliases - in the SQLAlchemy Unified Tutorial

    Selecting ORM Aliases - in the ORM Querying Guide

    • Parameters:

      • element – element to be aliased. Is normally a mapped class, but for convenience can also be a FromClause element.

      • alias – Optional selectable unit to map the element to. This is usually used to link the object to a subquery, and should be an aliased select construct as one would produce from the Query.subquery() method, or the Select.subquery() or Select.alias() methods of the select() construct.

      • name – optional string name to use for the alias, if not specified by the alias parameter. The name, among other things, forms the attribute name that will be accessible via tuples returned by a Query object. Not supported when creating aliases of Join objects.

      • flat – Boolean, will be passed through to the FromClause.alias() call so that aliases of Join objects will alias the individual tables inside the join, rather than creating a subquery. This is generally supported by all modern databases with regards to right-nested joins and generally produces more efficient queries.

      • adapt_on_names

        if True, more liberal “matching” will be used when mapping the mapped columns of the ORM entity to those of the given selectable - a name-based match will be performed if the given selectable doesn’t otherwise have a column that corresponds to one on the entity. The use case for this is when associating an entity with some derived selectable such as one that uses aggregate functions:

        class UnitPrice(Base):
            __tablename__ = 'unit_price'
            ...
            unit_id = Column(Integer)
            price = Column(Numeric)


        aggregated_unit_price = (
            Session.query(func.sum(UnitPrice.price).label('price'))
            .group_by(UnitPrice.unit_id)
            .subquery()
        )

        aggregated_unit_price = aliased(
            UnitPrice, alias=aggregated_unit_price, adapt_on_names=True
        )

        Above, functions on aggregated_unit_price which refer to .price will return the func.sum(UnitPrice.price).label('price') column, as it is matched on the name “price”. Ordinarily, the “price” function wouldn’t have any “column correspondence” to the actual UnitPrice.price column, as it is not a proxy of the original.

    class sqlalchemy.orm.util.AliasedClass

    Represents an “aliased” form of a mapped class for usage with Query.

    The ORM equivalent of a Core Alias construct, this object mimics the mapped class using a __getattr__ scheme and maintains a reference to a real Alias object.

    A primary purpose of AliasedClass is to serve as an alternate within a SQL statement generated by the ORM, such that an existing mapped entity can be used in multiple contexts. A simple example:

    # find all pairs of users with the same name
    user_alias = aliased(User)

    session.query(User, user_alias).join(
        (user_alias, User.id > user_alias.id)
    ).filter(User.name == user_alias.name)

    AliasedClass is also capable of mapping an existing mapped class to an entirely new selectable, provided this selectable is column-compatible with the existing mapped selectable, and it can also be configured in a mapping as the target of a relationship(). See the links below for examples.

    The AliasedClass object is constructed typically using the aliased() function. It also is produced with additional configuration when using the with_polymorphic() function.

    The resulting object is an instance of AliasedClass. This object implements an attribute scheme which produces the same attribute and method interface as the original mapped class, allowing AliasedClass to be compatible with any attribute technique which works on the original class, including hybrid attributes (see Hybrid Attributes).

    The AliasedClass can be inspected for its underlying Mapper, aliased selectable, and other information using inspect():

    from sqlalchemy import inspect

    my_alias = aliased(MyClass)
    insp = inspect(my_alias)

    The resulting inspection object is an instance of AliasedInsp.

    See also

    aliased()

    Relationship to Aliased Class

    Class signature

    class sqlalchemy.orm.AliasedClass (sqlalchemy.inspection.Inspectable, sqlalchemy.orm.ORMColumnsClauseRole)

    class sqlalchemy.orm.util.AliasedInsp

    Provide an inspection interface for an AliasedClass object.

    The AliasedInsp object is returned given an AliasedClass using the inspect() function:

    from sqlalchemy import inspect
    from sqlalchemy.orm import aliased

    my_alias = aliased(MyMappedClass)
    insp = inspect(my_alias)

    Attributes on AliasedInsp include:

    • entity - the AliasedClass represented.

    • mapper - the Mapper mapping the underlying class.

    • selectable - the Alias construct which ultimately represents an aliased Table or Select construct.

    • name - the name of the alias. Also is used as the attribute name when returned in a result tuple from Query.

    • polymorphic_on - an alternate column or SQL expression which will be used as the “discriminator” for a polymorphic load.


    Class signature

    class sqlalchemy.orm.AliasedInsp (sqlalchemy.orm.ORMEntityColumnsClauseRole, sqlalchemy.orm.ORMFromClauseRole, sqlalchemy.sql.cache_key.HasCacheKey, sqlalchemy.util.langhelpers.MemoizedSlots, sqlalchemy.inspection.Inspectable, typing.Generic)

    class sqlalchemy.orm.Bundle

    A grouping of SQL expressions that are returned by a Query under one namespace.

    The Bundle essentially allows nesting of the tuple-based results returned by a column-oriented Query object. It also is extensible via simple subclassing, where the primary capability to override is how the set of expressions should be returned, allowing post-processing as well as custom return types, without involving ORM identity-mapped classes.

    See also

    Grouping Selected Attributes with Bundles

    Members

    __init__(), c, columns, create_row_processor(), is_aliased_class, is_bundle, is_clause_element, is_mapper, label(), single_entity

    Class signature

    class sqlalchemy.orm.Bundle (sqlalchemy.orm.ORMColumnsClauseRole, sqlalchemy.sql.annotation.SupportsCloneAnnotations, sqlalchemy.sql.cache_key.MemoizedHasCacheKey, sqlalchemy.inspection.Inspectable)

    • method sqlalchemy.orm.Bundle.__init__(name: str, *exprs: _ColumnExpressionArgument[Any], **kw: Any)

      Construct a new Bundle.

      e.g.:

      bn = Bundle("mybundle", MyClass.x, MyClass.y)

      for row in session.query(bn).filter(bn.c.x == 5).filter(bn.c.y == 4):
          print(row.mybundle.x, row.mybundle.y)
      • Parameters:

        • name – name of the bundle.

        • *exprs – columns or SQL expressions comprising the bundle.

        • single_entity=False – if True, rows for this Bundle can be returned as a “single entity” outside of any enclosing tuple in the same manner as a mapped entity.

    • attribute c: ReadOnlyColumnCollection[str, KeyedColumnElement[Any]]

      An alias for Bundle.columns.

    • attribute columns: ReadOnlyColumnCollection[str, KeyedColumnElement[Any]]

      A namespace of SQL expressions referred to by this Bundle.

    • method sqlalchemy.orm.Bundle.create_row_processor(query: Select[Any], procs: Sequence[Callable[[Row[Any]], Any]], labels: Sequence[str]) → Callable[[Row[Any]], Any]

      Produce the “row processing” function for this Bundle.

      May be overridden by subclasses to provide custom behaviors when results are fetched. The method is passed the statement object and a set of “row processor” functions at query execution time; these processor functions when given a result row will return the individual attribute value, which can then be adapted into any kind of return data structure.

      The example below illustrates replacing the usual return structure with a straight Python dictionary:
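      A sketch of such a subclass; the DictBundle name and the stand-in row callables used to exercise it are illustrative:

```python
from sqlalchemy import column
from sqlalchemy.orm import Bundle


class DictBundle(Bundle):
    """Bundle variant returning each bundle value as a dictionary."""

    def create_row_processor(self, query, procs, labels):
        # procs: one callable per bundled expression, each extracting a
        # value from a result row; zip the outputs with their labels
        def proc(row):
            return dict(zip(labels, (p(row) for p in procs)))

        return proc


bn = DictBundle("mybundle", column("data1"), column("data2"))

# exercise the processor directly with stand-in callables
fn = bn.create_row_processor(
    None, [lambda row: "d1", lambda row: "d2"], ["data1", "data2"]
)
result = fn(None)
```

      In normal use the ORM supplies the query, procs and labels arguments at execution time; the stand-ins above only demonstrate the zipping logic.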

      A result from the above Bundle will return dictionary values:

      bn = DictBundle('mybundle', MyClass.data1, MyClass.data2)

      for row in session.execute(select(bn).where(bn.c.data1 == 'd1')):
          print(row.mybundle['data1'], row.mybundle['data2'])
    • attribute is_aliased_class = False

      True if this object is an instance of AliasedClass.

    • attribute is_bundle = True

      True if this object is an instance of Bundle.

    • attribute is_clause_element = False

      True if this object is an instance of ClauseElement.

    • attribute is_mapper = False

      True if this object is an instance of Mapper.

    • method sqlalchemy.orm.Bundle.label(name)

      Provide a copy of this Bundle passing a new label.

    • attribute single_entity = False

      If True, queries for a single Bundle will be returned as a single entity, rather than an element within a keyed tuple.

    function sqlalchemy.orm.with_loader_criteria(entity_or_base: _EntityType[Any], where_criteria: _ColumnExpressionArgument[bool], loader_only: bool = False, include_aliases: bool = False, propagate_to_loaders: bool = True, track_closure_variables: bool = True) → LoaderCriteriaOption

    Add additional WHERE criteria to the load for all occurrences of a particular entity.

    New in version 1.4.

    The with_loader_criteria() option is intended to add limiting criteria to a particular kind of entity in a query, globally, meaning it will apply to the entity as it appears in the SELECT query as well as within any subqueries, join conditions, and relationship loads, including both eager and lazy loaders, without the need for it to be specified in any particular part of the query. The rendering logic uses the same system used by single table inheritance to ensure a certain discriminator is applied to a table.

    E.g., using 2.0 style queries, we can limit the way the User.addresses collection is loaded, regardless of the kind of loading used:

    from sqlalchemy.orm import with_loader_criteria

    stmt = select(User).options(
        selectinload(User.addresses),
        with_loader_criteria(Address, Address.email_address != 'foo'),
    )

    Above, the “selectinload” for User.addresses will apply the given filtering criteria to the WHERE clause.

    Another example, where the filtering will be applied to the ON clause of the join, in this example using 1.x style queries:

    q = session.query(User).outerjoin(User.addresses).options(
        with_loader_criteria(Address, Address.email_address != 'foo')
    )

    The primary purpose of with_loader_criteria() is to use it in the SessionEvents.do_orm_execute() event handler to ensure that all occurrences of a particular entity are filtered in a certain way, such as filtering for access control roles. It also can be used to apply criteria to relationship loads. In the example below, we can apply a certain set of rules to all queries emitted by a particular Session:

    from sqlalchemy import event

    session = Session(bind=engine)


    @event.listens_for(session, "do_orm_execute")
    def _add_filtering_criteria(execute_state):
        if (
            execute_state.is_select
            and not execute_state.is_column_load
            and not execute_state.is_relationship_load
        ):
            execute_state.statement = execute_state.statement.options(
                with_loader_criteria(
                    SecurityRole,
                    lambda cls: cls.role.in_(['some_role']),
                    include_aliases=True,
                )
            )

    In the above example, the SessionEvents.do_orm_execute() event will intercept all queries emitted using the Session. For those queries which are SELECT statements and are not attribute or relationship loads, a custom with_loader_criteria() option is added to the query. The option will be used in the given statement and will also be automatically propagated to all relationship loads that descend from this query.

    The criteria argument given is a lambda that accepts a cls argument. The given class will expand to include all mapped subclasses and need not itself be a mapped class.

    Tip

    When using the with_loader_criteria() option in conjunction with the contains_eager() loader option, it’s important to note that with_loader_criteria() only affects the part of the query that determines what SQL is rendered in terms of the WHERE and FROM clauses. The contains_eager() option does not affect the rendering of the SELECT statement outside of the columns clause, so it does not have any interaction with the with_loader_criteria() option. However, the way things “work” is that contains_eager() is meant to be used with a query that is already selecting from the additional entities in some way, where with_loader_criteria() can apply its additional criteria.

    In the example below, assuming a mapping relationship as A -> A.bs -> B, the given option will affect the way in which the JOIN is rendered:

    stmt = select(A).join(A.bs).options(
        contains_eager(A.bs),
        with_loader_criteria(B, B.flag == 1),
    )

    Above, the given with_loader_criteria() option will affect the ON clause of the JOIN that is specified by .join(A.bs), so it is applied as expected. The contains_eager() option has the effect that columns from B are added to the columns clause:

    SELECT
        b.id, b.a_id, b.data, b.flag,
        a.id AS id_1,
        a.data AS data_1
    FROM a JOIN b ON a.id = b.a_id AND b.flag = :flag_1

    The use of the contains_eager() option within the above statement has no effect on the behavior of the with_loader_criteria() option. If the contains_eager() option were omitted, the SQL would be the same as regards the FROM and WHERE clauses, where with_loader_criteria() continues to add its criteria to the ON clause of the JOIN. The addition of contains_eager() only affects the columns clause, in that additional columns against b are added, which are then consumed by the ORM to produce B instances.

    Warning

    A lambda passed to with_loader_criteria() is only invoked once per unique class. Custom functions should not be invoked within this lambda. See Using Lambdas to add significant speed gains to statement production for an overview of the “lambda SQL” feature, which is for advanced use only.

    • Parameters:

      • entity_or_base – a mapped class, or a class that is a super class of a particular set of mapped classes, to which the rule will apply.

      • where_criteria

        a Core SQL expression that applies limiting criteria. This may also be a “lambda:” or Python function that accepts a target class as an argument, when the given class is a base with many different mapped subclasses.

        Note

        To support pickling, use a module-level Python function to produce the SQL expression instead of a lambda or a fixed SQL expression, which tend to not be picklable.

      • include_aliases – if True, apply the rule to aliased() constructs as well.

      • propagate_to_loaders

        defaults to True; apply to relationship loaders such as lazy loaders. This indicates that the option object itself, including its SQL expression, is carried along with each loaded instance. Set to False to prevent the object from being assigned to individual instances.

        See also

        ORM Query Events - includes examples of using with_loader_criteria().

        Adding global WHERE / ON criteria - basic example on how to combine with_loader_criteria() with the SessionEvents.do_orm_execute() event.

      • track_closure_variables

        when False, closure variables inside of a lambda expression will not be used as part of any cache key. This allows more complex expressions to be used inside of a lambda expression but requires that the lambda ensures it returns the identical SQL every time given a particular class.

        New in version 1.4.0b2.

    function sqlalchemy.orm.join(left: _FromClauseArgument, right: _FromClauseArgument, onclause: Optional[_OnClauseArgument] = None, isouter: bool = False, full: bool = False) → _ORMJoin

    Produce an inner join between left and right clauses.

    join() is an extension to the core join interface provided by join(), where the left and right selectables may be not only core selectable objects such as Table, but also mapped classes or AliasedClass instances. The “on” clause can be a SQL expression or an ORM mapped attribute referencing a configured relationship().

    join() is not commonly needed in modern usage, as its functionality is encapsulated within that of the Select.join() and Query.join() methods, which feature a significant amount of automation beyond join() by itself. Explicit use of join() with ORM-enabled SELECT statements involves use of the Select.select_from() method, as in:

```python
from sqlalchemy.orm import join

stmt = select(User).\
    select_from(join(User, Address, User.addresses)).\
    filter(Address.email_address == 'foo@bar.com')
```

    In modern SQLAlchemy the above join can be written more succinctly as:

```python
stmt = select(User).\
    join(User.addresses).\
    filter(Address.email_address == 'foo@bar.com')
```

    Warning

    using join() directly may not work properly with modern ORM options such as with_loader_criteria(). It is strongly recommended to use the idiomatic join patterns provided by methods such as Select.join() and Select.join_from() when creating ORM joins.

    See also

    Joins - in the ORM Querying Guide, for background on idiomatic ORM join patterns

    function sqlalchemy.orm.outerjoin(left: _FromClauseArgument, right: _FromClauseArgument, onclause: Optional[_OnClauseArgument] = None, full: bool = False) → _ORMJoin

    Produce a left outer join between left and right clauses.

    This is the “outer join” version of the join() function, featuring the same behavior except that an OUTER JOIN is generated. See that function’s documentation for other usage details.

    function sqlalchemy.orm.with_parent(instance: object, prop: attributes.QueryableAttribute[Any], from_entity: Optional[_EntityType[Any]] = None) → ColumnElement[bool]

    Create filtering criterion that relates this query’s primary entity to the given related instance, using established relationship() configuration.

    E.g.:

```python
stmt = select(Address).where(with_parent(some_user, User.addresses))
```

    The SQL rendered is the same as that rendered when a lazy loader would fire off from the given parent on that attribute, meaning that the appropriate state is taken from the parent object in Python without the need to render joins to the parent table in the rendered statement.

    The given property may also make use of PropComparator.of_type() to indicate the left side of the criteria:

```python
a1 = aliased(Address)
a2 = aliased(Address)
stmt = select(a1, a2).where(
    with_parent(u1, User.addresses.of_type(a2))
)
```

    The above use is equivalent to using the from_entity argument.

    • Parameters:

      • instance – An instance which has some relationship().

      • prop – Class-bound attribute, which indicates what relationship from the instance should be used to reconcile the parent/child relationship.

      • from_entity

        Entity to consider as the left side. This defaults to the “zero” entity of the Query itself.

        New in version 1.2.
