Collection Customization and API Details
Mapping a one-to-many or many-to-many relationship results in a collection of values accessible through an attribute on the parent instance. The two common collection types for these are list and set, which in mappings that use Mapped is established by using the collection type within the Mapped container, as demonstrated in the Parent.children collection below where list is used.
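A minimal sketch of the list form, mirroring the set example shown next with list substituted in the Mapped annotation:

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship


class Base(DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    parent_id: Mapped[int] = mapped_column(primary_key=True)

    # use a list
    children: Mapped[list["Child"]] = relationship()


class Child(Base):
    __tablename__ = "child"

    child_id: Mapped[int] = mapped_column(primary_key=True)
    parent_id: Mapped[int] = mapped_column(ForeignKey("parent.parent_id"))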
Or for a set, illustrated in the same Parent.children collection:
from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship


class Base(DeclarativeBase):
    pass


class Parent(Base):
    __tablename__ = "parent"

    parent_id: Mapped[int] = mapped_column(primary_key=True)

    # use a set
    children: Mapped[set["Child"]] = relationship()


class Child(Base):
    __tablename__ = "child"

    child_id: Mapped[int] = mapped_column(primary_key=True)
    parent_id: Mapped[int] = mapped_column(ForeignKey("parent.parent_id"))
Note
If using Python 3.7 or 3.8, annotations for collections need to use typing.List or typing.Set, e.g. Mapped[List["Child"]] or Mapped[Set["Child"]]; the list and set Python built-ins don’t yet support generic annotation in these Python versions, such as:
from typing import List


class Parent(Base):
    __tablename__ = "parent"

    parent_id: Mapped[int] = mapped_column(primary_key=True)

    # use a List, Python 3.8 and earlier
    children: Mapped[List["Child"]] = relationship()
When using mappings without the Mapped annotation, such as when using imperative mappings or untyped Python code, as well as in a few special cases, the collection class for a relationship() can always be specified directly using the relationship.collection_class parameter:
# non-annotated mapping


class Parent(Base):
    __tablename__ = "parent"

    parent_id = mapped_column(Integer, primary_key=True)

    children = relationship("Child", collection_class=set)


class Child(Base):
    __tablename__ = "child"

    child_id = mapped_column(Integer, primary_key=True)
    parent_id = mapped_column(ForeignKey("parent.parent_id"))
In the absence of relationship.collection_class or Mapped, the default collection type is list.
Beyond the list and set builtins, there is also support for two varieties of dictionary, described below at Dictionary Collections. There is also support for any arbitrary mutable sequence type to be set up as the target collection, with some additional configuration steps; this is described in the section Custom Collection Implementations.
Dictionary Collections

A little extra detail is needed when using a dictionary as a collection. This is because objects are always loaded from the database as lists, and a key-generation strategy must be available to populate the dictionary correctly. The attribute_keyed_dict() function is by far the most common way to achieve a simple dictionary collection. It produces a dictionary class that will apply a particular attribute of the mapped class as a key. Below we map an Item class containing a dictionary of Note items keyed to the Note.keyword attribute. When using attribute_keyed_dict(), the Mapped annotation may be typed using KeyFuncDict or just plain dict, as illustrated in the following example. However, the relationship.collection_class parameter is required in this case so that the attribute_keyed_dict() may be appropriately parametrized:
from typing import Optional

from sqlalchemy import ForeignKey
from sqlalchemy.orm import attribute_keyed_dict
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship


class Base(DeclarativeBase):
    pass


class Item(Base):
    __tablename__ = "item"

    id: Mapped[int] = mapped_column(primary_key=True)

    notes: Mapped[dict[str, "Note"]] = relationship(
        collection_class=attribute_keyed_dict("keyword"),
        cascade="all, delete-orphan",
    )


class Note(Base):
    __tablename__ = "note"

    id: Mapped[int] = mapped_column(primary_key=True)
    item_id: Mapped[int] = mapped_column(ForeignKey("item.id"))
    keyword: Mapped[str]
    text: Mapped[Optional[str]]

    def __init__(self, keyword: str, text: str):
        self.keyword = keyword
        self.text = text
Item.notes is then a dictionary:
>>> item = Item()
>>> item.notes["a"] = Note("a", "atext")
>>> item.notes.items()
{'a': <__main__.Note object at 0x2eaaf0>}
attribute_keyed_dict() will ensure that the .keyword attribute of each Note complies with the key in the dictionary. For example, when assigning to Item.notes, the dictionary key we supply must match that of the actual Note object:
item = Item()
item.notes = {
    "a": Note("a", "atext"),
    "b": Note("b", "btext"),
}
The attribute which attribute_keyed_dict() uses as a key does not need to be mapped at all! Using a regular Python @property allows virtually any detail or combination of details about the object to be used as the key, as below when we establish it as a tuple of Note.keyword and the first ten letters of the Note.text field:
class Item(Base):
    __tablename__ = "item"

    id: Mapped[int] = mapped_column(primary_key=True)

    notes: Mapped[dict[str, "Note"]] = relationship(
        collection_class=attribute_keyed_dict("note_key"),
        back_populates="item",
        cascade="all, delete-orphan",
    )


class Note(Base):
    __tablename__ = "note"

    id: Mapped[int] = mapped_column(primary_key=True)
    item_id: Mapped[int] = mapped_column(ForeignKey("item.id"))
    keyword: Mapped[str]
    text: Mapped[str]

    item: Mapped["Item"] = relationship(back_populates="notes")

    @property
    def note_key(self):
        return (self.keyword, self.text[0:10])

    def __init__(self, keyword: str, text: str):
        self.keyword = keyword
        self.text = text
Above we added a Note.item relationship, with a bi-directional configuration. Assigning to this reverse relationship, the Note is added to the Item.notes dictionary and the key is generated for us automatically:
>>> item = Item()
>>> n1 = Note("a", "atext")
>>> n1.item = item
>>> item.notes
{('a', 'atext'): <__main__.Note object at 0x2eaaf0>}
Other built-in dictionary types include column_keyed_dict(), which is almost like attribute_keyed_dict() except given the Column object directly:
from sqlalchemy.orm import column_keyed_dict


class Item(Base):
    __tablename__ = "item"

    id: Mapped[int] = mapped_column(primary_key=True)

    notes: Mapped[dict[str, "Note"]] = relationship(
        collection_class=column_keyed_dict(Note.__table__.c.keyword),
        cascade="all, delete-orphan",
    )
as well as mapped_collection(), which is passed any callable function. Note that it’s usually easier to use attribute_keyed_dict() along with a @property as mentioned earlier:
from sqlalchemy.orm import mapped_collection


class Item(Base):
    __tablename__ = "item"

    id: Mapped[int] = mapped_column(primary_key=True)

    notes: Mapped[dict[str, "Note"]] = relationship(
        collection_class=mapped_collection(lambda note: note.text[0:10]),
        cascade="all, delete-orphan",
    )
Dictionary mappings are often combined with the “Association Proxy” extension to produce streamlined dictionary views. See Proxying to Dictionary Based Collections and Composite Association Proxies for examples.
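As a brief sketch of that combination, assuming the Item / Note mapping from the earlier example and an illustrative note_texts attribute name, an association proxy can expose the keyed Note objects as a plain dictionary of strings:

from sqlalchemy.ext.associationproxy import association_proxy


class Item(Base):
    __tablename__ = "item"

    id: Mapped[int] = mapped_column(primary_key=True)

    notes: Mapped[dict[str, "Note"]] = relationship(
        collection_class=attribute_keyed_dict("keyword"),
        cascade="all, delete-orphan",
    )

    # hypothetical proxy: item.note_texts["a"] reads and writes Note.text,
    # with new Note objects produced by the creator callable on assignment
    note_texts = association_proxy(
        "notes", "text", creator=lambda key, value: Note(keyword=key, text=value)
    )

With this in place, item.note_texts behaves like a plain dict of strings while the underlying Item.notes collection remains unchanged.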
Dealing with Key Mutations and back-populating for Dictionary collections
When using attribute_keyed_dict(), the “key” for the dictionary is taken from an attribute on the target object. Changes to this key are not tracked. This means that the key must be assigned when the object is first added to the collection, and if the key changes afterwards, the collection will not be mutated. A typical example where this might be an issue is when relying upon backrefs to populate an attribute mapped collection. Given the following:
class A(Base):
    __tablename__ = "a"

    id: Mapped[int] = mapped_column(primary_key=True)

    bs: Mapped[dict[str, "B"]] = relationship(
        collection_class=attribute_keyed_dict("data"),
        back_populates="a",
    )


class B(Base):
    __tablename__ = "b"

    id: Mapped[int] = mapped_column(primary_key=True)
    a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
    data: Mapped[str]

    a: Mapped["A"] = relationship(back_populates="bs")
Above, if we create a B() that refers to a specific A(), the back populates will then add the B() to the A.bs collection, however if the value of B.data is not set yet, the key will be None:
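A sketch of this, assuming an A() instance a1 (object addresses are illustrative):

>>> a1 = A()
>>> b1 = B(a=a1)
>>> a1.bs
{None: <test3.B object at 0x7f7b1023ef70>}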
Setting b1.data after the fact does not update the collection:
>>> b1.data = "the key"
>>> a1.bs
{None: <test3.B object at 0x7f7b1023ef70>}
This can also be seen if one attempts to set up B() in the constructor. The order of arguments changes the result:
>>> B(a=a1, data="the key")
<test3.B object at 0x7f7b10114280>
>>> a1.bs
{None: <test3.B object at 0x7f7b10114280>}
vs:
>>> B(data="the key", a=a1)
<test3.B object at 0x7f7b10114340>
>>> a1.bs
{'the key': <test3.B object at 0x7f7b10114340>}
If backrefs are being used in this way, ensure that attributes are populated in the correct order using an __init__ method.
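A minimal sketch of such an __init__, restating the B class from above (argument names are illustrative); assigning data before a ensures the key is populated by the time the backref appends to A.bs:

from typing import Optional


class B(Base):
    __tablename__ = "b"

    id: Mapped[int] = mapped_column(primary_key=True)
    a_id: Mapped[int] = mapped_column(ForeignKey("a.id"))
    data: Mapped[str]

    a: Mapped["A"] = relationship(back_populates="bs")

    def __init__(self, data: str, a: Optional["A"] = None):
        # set the key attribute first, then the relationship, so that the
        # backref sees a populated key when it adds this object to A.bs
        self.data = data
        if a is not None:
            self.a = a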
An event handler such as the following may also be used to track changes in the collection:
from sqlalchemy import event
from sqlalchemy.orm import attributes


@event.listens_for(B.data, "set")
def set_item(obj, value, previous, initiator):
    if obj.a is not None:
        previous = None if previous == attributes.NO_VALUE else previous
        obj.a.bs[value] = obj
        obj.a.bs.pop(previous)
Custom Collection Implementations

You can use your own types for collections as well. In simple cases, inheriting from list or set, adding custom behavior, is all that’s needed. In other cases, special decorators are needed to tell SQLAlchemy more detail about how the collection operates.
Do I need a custom collection implementation?
In most cases not at all! The most common use case for a “custom” collection is one that validates or marshals incoming values into a new form, such as a string that becomes a class instance, or one which goes a step beyond and represents the data internally in some fashion, presenting a “view” of that data to the outside in a different form.
For the first use case, the validates() decorator is by far the simplest way to intercept incoming values in all cases for the purposes of validation and simple marshaling. See Simple Validators for an example of this.
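As a hedged sketch of that first pattern (the Post / Tag names here are made up for illustration), a validates() hook can coerce plain strings into mapped objects as they enter an ordinary list collection:

from sqlalchemy import ForeignKey
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy.orm import validates


class Base(DeclarativeBase):
    pass


class Post(Base):
    __tablename__ = "post"

    id: Mapped[int] = mapped_column(primary_key=True)
    tags: Mapped[list["Tag"]] = relationship()

    @validates("tags")
    def _coerce_tag(self, key, value):
        # accept either a Tag instance or a plain string; strings are
        # wrapped in a new Tag before they enter the collection
        if isinstance(value, str):
            value = Tag(name=value)
        return value


class Tag(Base):
    __tablename__ = "tag"

    id: Mapped[int] = mapped_column(primary_key=True)
    post_id: Mapped[int] = mapped_column(ForeignKey("post.id"))
    name: Mapped[str]

With this in place, post.tags.append("python") stores a Tag instance while the collection itself remains a plain instrumented list.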
For the second use case, the Association Proxy extension is a well-tested, widely used system that provides a read/write “view” of a collection in terms of some attribute present on the target object. As the target attribute can be a @property that returns virtually anything, a wide array of “alternative” views of a collection can be constructed with just a few functions. This approach leaves the underlying mapped collection unaffected and avoids the need to carefully tailor collection behavior on a method-by-method basis.
Customized collections are useful when the collection needs to have special behaviors upon access or mutation operations that can’t otherwise be modeled externally to the collection. They can of course be combined with the above two approaches.
Collections in SQLAlchemy are transparently instrumented. Instrumentation means that normal operations on the collection are tracked and result in changes being written to the database at flush time. Additionally, collection operations can fire events which indicate some secondary operation must take place. Examples of a secondary operation include saving the child item in the parent’s Session (i.e. the save-update cascade), as well as synchronizing the state of a bi-directional relationship (i.e. a backref()).
The collections package understands the basic interface of lists, sets and dicts and will automatically apply instrumentation to those built-in types and their subclasses. Object-derived types that implement a basic collection interface are detected and instrumented via duck-typing:
class ListLike:
    def __init__(self):
        self.data = []

    def append(self, item):
        self.data.append(item)

    def remove(self, item):
        self.data.remove(item)

    def extend(self, items):
        self.data.extend(items)

    def __iter__(self):
        return iter(self.data)

    def foo(self):
        return "foo"
append, remove, and extend are known members of list, and will be instrumented automatically. __iter__ is not a mutator method and won’t be instrumented, and foo won’t be either.
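A hedged sketch of wiring such a class into a mapping (reusing the Parent / Child names from the earlier examples); the non-annotated form is shown since the collection type is custom:

class Parent(Base):
    __tablename__ = "parent"

    parent_id: Mapped[int] = mapped_column(primary_key=True)

    # ListLike from above becomes the collection type for Parent.children
    children = relationship("Child", collection_class=ListLike)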
Duck-typing (i.e. guesswork) isn’t rock-solid, of course, so you can be explicit about the interface you are implementing by providing an __emulates__ class attribute:
class SetLike:
    __emulates__ = set

    def __init__(self):
        self.data = set()

    def append(self, item):
        self.data.add(item)

    def remove(self, item):
        self.data.remove(item)

    def __iter__(self):
        return iter(self.data)
This class looks similar to a Python list (i.e. “list-like”) as it has an append method, but the __emulates__ attribute forces it to be treated as a set. remove is known to be part of the set interface and will be instrumented.
But this class won’t work quite yet: a little glue is needed to adapt it for use by SQLAlchemy. The ORM needs to know which methods to use to append, remove and iterate over members of the collection. When using a type like list or set, the appropriate methods are well-known and used automatically when present. However the class above, which only roughly resembles a set, does not provide the expected add method, so we must indicate to the ORM the method that will instead take the place of the add method, in this case using the @collection.appender decorator; this is illustrated in the next section.
Decorators can be used to tag the individual methods the ORM needs to manage collections. Use them when your class doesn’t quite meet the regular interface for its container type, or when you otherwise would like to use a different method to get the job done.
from sqlalchemy.orm.collections import collection


class SetLike:
    __emulates__ = set

    def __init__(self):
        self.data = set()

    @collection.appender
    def append(self, item):
        self.data.add(item)

    def remove(self, item):
        self.data.remove(item)

    def __iter__(self):
        return iter(self.data)
And that’s all that’s needed to complete the example. SQLAlchemy will add instances via the append method. remove and __iter__ are the default methods for sets and will be used for removing and iteration. Default methods can be changed as well:
from sqlalchemy.orm.collections import collection


class MyList(list):
    @collection.remover
    def zark(self, item):
        # do something special...
        ...

    @collection.iterator
    def hey_use_this_instead_for_iteration(self):
        ...
There is no requirement to be “list-like” or “set-like” at all. Collection classes can be any shape, so long as they have the append, remove and iterate interface marked for SQLAlchemy’s use. Append and remove methods will be called with a mapped entity as the single argument, and iterator methods are called with no arguments and must return an iterator.
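As a hedged sketch of that flexibility (the class and method names here are made up), a container that is neither list- nor set-like can expose the three required operations through decorators:

from sqlalchemy.orm.collections import collection


class Bag:
    """A deliberately non-list-like container holding members in a plain
    dict keyed by id()."""

    def __init__(self):
        self._members = {}

    @collection.appender
    def put(self, member):
        self._members[id(member)] = member

    @collection.remover
    def discard(self, member):
        self._members.pop(id(member), None)

    @collection.iterator
    def members(self):
        return iter(self._members.values())

Such a class would then be configured with relationship(..., collection_class=Bag).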
The KeyFuncDict class can be used as a base class for your custom types or as a mix-in to quickly add dict collection support to other classes. It uses a keying function to delegate to __setitem__ and __delitem__:
from sqlalchemy.orm.collections import KeyFuncDict


class MyNodeMap(KeyFuncDict):
    """Holds 'Node' objects, keyed by the 'name' attribute."""

    def __init__(self, *args, **kw):
        super().__init__(keyfunc=lambda node: node.name)
        dict.__init__(self, *args, **kw)
When subclassing KeyFuncDict, user-defined versions of __setitem__() or __delitem__() should be decorated with collection.internally_instrumented(), if they call down to those same methods on KeyFuncDict. This is because the methods on KeyFuncDict are already instrumented - calling them from within an already instrumented call can cause events to be fired off repeatedly, or inappropriately, leading to internal state corruption in rare cases:
from sqlalchemy.orm.collections import KeyFuncDict, collection


class MyKeyFuncDict(KeyFuncDict):
    """Use @internally_instrumented when your methods
    call down to already-instrumented methods.
    """

    @collection.internally_instrumented
    def __setitem__(self, key, value, _sa_initiator=None):
        # do something with key, value
        super(MyKeyFuncDict, self).__setitem__(key, value, _sa_initiator)

    @collection.internally_instrumented
    def __delitem__(self, key, _sa_initiator=None):
        # do something with key
        super(MyKeyFuncDict, self).__delitem__(key, _sa_initiator)
The ORM understands the dict interface just like lists and sets, and will automatically instrument all “dict-like” methods if you choose to subclass dict or provide dict-like collection behavior in a duck-typed class. You must decorate appender and remover methods, however; there are no compatible methods in the basic dictionary interface for SQLAlchemy to use by default. Iteration will go through values() unless otherwise decorated.
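A hedged sketch of such a duck-typed, dict-like collection (keying by an assumed .name attribute on each member; all names here are illustrative): only the appender and remover need decorating, while iteration defaults to values():

from sqlalchemy.orm.collections import collection


class NodesByName:
    """Dict-like collection that keys members by an assumed 'name' attribute."""

    __emulates__ = dict

    def __init__(self):
        self._data = {}

    @collection.appender
    @collection.replaces(1)
    def _append(self, node):
        # key the member by its name; return any displaced member so the
        # ORM can emit a remove event for it
        previous = self._data.get(node.name)
        self._data[node.name] = node
        return previous

    @collection.remover
    def _remove(self, node):
        del self._data[node.name]

    def __getitem__(self, key):
        return self._data[key]

    def keys(self):
        return self._data.keys()

    def values(self):
        return self._data.values()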
Many custom types and existing library classes can be used as an entity collection type as-is without further ado. However, it is important to note that the instrumentation process will modify the type, adding decorators around methods automatically.
The decorations are lightweight and no-op outside of relationships, but they do add unneeded overhead when triggered elsewhere. When using a library class as a collection, it can be good practice to use the “trivial subclass” trick to restrict the decorations to just your usage in relationships. For example:
class MyAwesomeList(some.great.library.AwesomeList):
    pass


# ... relationship(..., collection_class=MyAwesomeList)
The ORM uses this approach for built-ins, quietly substituting a trivial subclass when a list, set or dict is used directly.
function sqlalchemy.orm.attribute_keyed_dict(attr_name: str, *, ignore_unpopulated_attribute: bool = False) → Type[KeyFuncDict[_KT, _KT]]
A dictionary-based collection type with attribute-based keying.
Changed in version 2.0: Renamed attribute_mapped_collection to attribute_keyed_dict().
Returns a KeyFuncDict factory which will produce new dictionary keys based on the value of a particular named attribute on ORM mapped instances to be added to the dictionary.
Note
the value of the target attribute must be assigned with its value at the time that the object is being added to the dictionary collection. Additionally, changes to the key attribute are not tracked, which means the key in the dictionary is not automatically synchronized with the key value on the target object itself. See Dealing with Key Mutations and back-populating for Dictionary collections for further details.
See also
Dictionary Collections - background on use
Parameters:
attr_name – string name of an ORM-mapped attribute on the mapped class, the value of which on a particular instance is to be used as the key for a new dictionary entry for that instance.
ignore_unpopulated_attribute –
if True, and the target attribute on an object is not populated at all, the operation will be silently skipped. By default, an error is raised.
New in version 2.0: an error is raised by default if the attribute being used for the dictionary key is determined to have never been populated with any value. The attribute_keyed_dict.ignore_unpopulated_attribute parameter may be set which will instead indicate that this condition should be ignored, and the append operation silently skipped. This is in contrast to the behavior of the 1.x series which would erroneously populate the value in the dictionary with an arbitrary key value of None.
function sqlalchemy.orm.column_keyed_dict(mapping_spec: Union[Type[_KT], Callable[[_KT], _VT]], *, ignore_unpopulated_attribute: bool = False) → Type[KeyFuncDict[_KT, _KT]]
A dictionary-based collection type with column-based keying.
Changed in version 2.0: Renamed column_mapped_collection to column_keyed_dict.
Returns a KeyFuncDict factory which will produce new dictionary keys based on the value of a particular Column-mapped attribute on ORM mapped instances to be added to the dictionary.
Note
the value of the target attribute must be assigned with its value at the time that the object is being added to the dictionary collection. Additionally, changes to the key attribute are not tracked, which means the key in the dictionary is not automatically synchronized with the key value on the target object itself. See Dealing with Key Mutations and back-populating for Dictionary collections for further details.
See also
Dictionary Collections - background on use
Parameters:
mapping_spec – a Column object that is expected to be mapped by the target mapper to a particular attribute on the mapped class, the value of which on a particular instance is to be used as the key for a new dictionary entry for that instance.
ignore_unpopulated_attribute –
if True, and the mapped attribute indicated by the given target attribute on an object is not populated at all, the operation will be silently skipped. By default, an error is raised.
New in version 2.0: an error is raised by default if the attribute being used for the dictionary key is determined to have never been populated with any value. The column_keyed_dict.ignore_unpopulated_attribute parameter may be set which will instead indicate that this condition should be ignored, and the append operation silently skipped. This is in contrast to the behavior of the 1.x series which would erroneously populate the value in the dictionary with an arbitrary key value of None.
function sqlalchemy.orm.keyfunc_mapping(keyfunc: _F, *, ignore_unpopulated_attribute: bool = False) → Type[KeyFuncDict[_KT, Any]]
A dictionary-based collection type with arbitrary keying.
Changed in version 2.0: Renamed mapped_collection to keyfunc_mapping().
Returns a KeyFuncDict factory with a keying function generated from keyfunc, a callable that takes an entity and returns a key value.
Note

the given keyfunc is called only once at the time that the target object is being added to the collection. Changes to the effective value returned by the function are not tracked.
See also
Dictionary Collections - background on use
Parameters:
keyfunc – a callable that will be passed the ORM-mapped instance which should then generate a new key to use in the dictionary. If the value returned is LoaderCallableStatus.NO_VALUE, an error is raised.
ignore_unpopulated_attribute –
if True, and the callable returns LoaderCallableStatus.NO_VALUE for a particular instance, the operation will be silently skipped. By default, an error is raised.

New in version 2.0: an error is raised by default if the callable being used for the dictionary key returns LoaderCallableStatus.NO_VALUE, which in an ORM attribute context indicates an attribute that was never populated with any value. The mapped_collection.ignore_unpopulated_attribute parameter may be set which will instead indicate that this condition should be ignored, and the append operation silently skipped. This is in contrast to the behavior of the 1.x series which would erroneously populate the value in the dictionary with an arbitrary key value of None.
sqlalchemy.orm.attribute_mapped_collection = <function attribute_keyed_dict>
A dictionary-based collection type with attribute-based keying.
Changed in version 2.0: Renamed to attribute_keyed_dict().
Returns a KeyFuncDict factory which will produce new dictionary keys based on the value of a particular named attribute on ORM mapped instances to be added to the dictionary.
Note
the value of the target attribute must be assigned with its value at the time that the object is being added to the dictionary collection. Additionally, changes to the key attribute are not tracked, which means the key in the dictionary is not automatically synchronized with the key value on the target object itself. See Dealing with Key Mutations and back-populating for Dictionary collections for further details.
See also
Dictionary Collections - background on use
Parameters:
attr_name – string name of an ORM-mapped attribute on the mapped class, the value of which on a particular instance is to be used as the key for a new dictionary entry for that instance.
ignore_unpopulated_attribute –
if True, and the target attribute on an object is not populated at all, the operation will be silently skipped. By default, an error is raised.
New in version 2.0: an error is raised by default if the attribute being used for the dictionary key is determined to have never been populated with any value. The attribute_keyed_dict.ignore_unpopulated_attribute parameter may be set which will instead indicate that this condition should be ignored, and the append operation silently skipped. This is in contrast to the behavior of the 1.x series which would erroneously populate the value in the dictionary with an arbitrary key value of None.
sqlalchemy.orm.column_mapped_collection = <function column_keyed_dict>
A dictionary-based collection type with column-based keying.
Changed in version 2.0: Renamed to column_keyed_dict.
Returns a KeyFuncDict factory which will produce new dictionary keys based on the value of a particular Column-mapped attribute on ORM mapped instances to be added to the dictionary.
Note
the value of the target attribute must be assigned with its value at the time that the object is being added to the dictionary collection. Additionally, changes to the key attribute are not tracked, which means the key in the dictionary is not automatically synchronized with the key value on the target object itself. See Dealing with Key Mutations and back-populating for Dictionary collections for further details.
See also
Dictionary Collections - background on use
Parameters:
mapping_spec – a Column object that is expected to be mapped by the target mapper to a particular attribute on the mapped class, the value of which on a particular instance is to be used as the key for a new dictionary entry for that instance.
ignore_unpopulated_attribute –
if True, and the mapped attribute indicated by the given target attribute on an object is not populated at all, the operation will be silently skipped. By default, an error is raised.
New in version 2.0: an error is raised by default if the attribute being used for the dictionary key is determined to have never been populated with any value. The column_keyed_dict.ignore_unpopulated_attribute parameter may be set which will instead indicate that this condition should be ignored, and the append operation silently skipped. This is in contrast to the behavior of the 1.x series which would erroneously populate the value in the dictionary with an arbitrary key value of None.
sqlalchemy.orm.mapped_collection = <function keyfunc_mapping>
A dictionary-based collection type with arbitrary keying.
Changed in version 2.0: Renamed to keyfunc_mapping().
Returns a KeyFuncDict factory with a keying function generated from keyfunc, a callable that takes an entity and returns a key value.
Note
the given keyfunc is called only once at the time that the target object is being added to the collection. Changes to the effective value returned by the function are not tracked.
See also
Dictionary Collections - background on use
Parameters:
keyfunc – a callable that will be passed the ORM-mapped instance which should then generate a new key to use in the dictionary. If the value returned is LoaderCallableStatus.NO_VALUE, an error is raised.
ignore_unpopulated_attribute –
if True, and the callable returns LoaderCallableStatus.NO_VALUE for a particular instance, the operation will be silently skipped. By default, an error is raised.
New in version 2.0: an error is raised by default if the callable being used for the dictionary key returns LoaderCallableStatus.NO_VALUE, which in an ORM attribute context indicates an attribute that was never populated with any value. The mapped_collection.ignore_unpopulated_attribute parameter may be set which will instead indicate that this condition should be ignored, and the append operation silently skipped. This is in contrast to the behavior of the 1.x series which would erroneously populate the value in the dictionary with an arbitrary key value of None.
class sqlalchemy.orm.KeyFuncDict
Base for ORM mapped dictionary classes.
Extends the dict type with additional methods needed by SQLAlchemy ORM collection classes. KeyFuncDict is used most directly by means of the attribute_keyed_dict() or column_keyed_dict() class factories. KeyFuncDict may also serve as the base for user-defined custom dictionary classes.
Changed in version 2.0: Renamed MappedCollection to KeyFuncDict.
See also
Custom Collection Implementations
Members
__init__(), clear(), pop(), popitem(), remove(), set(), setdefault(), update()

Class signature
class sqlalchemy.orm.KeyFuncDict (builtins.dict, typing.Generic)
method sqlalchemy.orm.KeyFuncDict.__init__(keyfunc: _F, *, ignore_unpopulated_attribute: bool = False) → None
Create a new collection with keying provided by keyfunc.
keyfunc may be any callable that takes an object and returns an object for use as a dictionary key.
The keyfunc will be called every time the ORM needs to add a member by value-only (such as when loading instances from the database) or remove a member. The usual cautions about dictionary keying apply: keyfunc(object) should return the same output for the life of the collection. Keying based on mutable properties can result in unreachable instances “lost” in the collection.

method sqlalchemy.orm.KeyFuncDict.clear() → None. Remove all items from D.
method sqlalchemy.orm.KeyFuncDict.pop(k[, d]) → v, remove specified key and return the corresponding value.
If the key is not found, return the default if given; otherwise, raise a KeyError.
method sqlalchemy.orm.KeyFuncDict.popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
method sqlalchemy.orm.KeyFuncDict.remove(value: _KT, _sa_initiator: Optional[] = None) → None
Remove an item by value, consulting the keyfunc for the key.
method sqlalchemy.orm.KeyFuncDict.set(value: _KT, _sa_initiator: Optional[] = None) → None
Add an item by value, consulting the keyfunc for the key.
method sqlalchemy.orm.KeyFuncDict.setdefault(key, default=None)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
method sqlalchemy.orm.KeyFuncDict.update([E, ]**F) → None. Update D from dict/iterable E and F.
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]. If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v. In either case, this is followed by: for k in F: D[k] = F[k].
sqlalchemy.orm.MappedCollection = <class ‘sqlalchemy.orm.mapped_collection.KeyFuncDict’>
Base for ORM mapped dictionary classes.
Extends the dict type with additional methods needed by SQLAlchemy ORM collection classes. KeyFuncDict is used most directly by means of the attribute_keyed_dict() or column_keyed_dict() class factories. KeyFuncDict may also serve as the base for user-defined custom dictionary classes.
Changed in version 2.0: Renamed MappedCollection to KeyFuncDict.
See also
Custom Collection Implementations
Object Name | Description |
---|---|
bulk_replace(values, existing_adapter, new_adapter[, initiator]) | Load a new collection, firing events based on prior like membership. |
collection | Decorators for entity collection classes. |
collection_adapter | attrgetter(attr, ...) --> attrgetter object |
CollectionAdapter | Bridges between the ORM and arbitrary Python collections. |
InstrumentedDict | An instrumented version of the built-in dict. |
InstrumentedList | An instrumented version of the built-in list. |
InstrumentedSet | An instrumented version of the built-in set. |
prepare_instrumentation(factory) | Prepare a callable for future use as a collection class factory. |
function sqlalchemy.orm.collections.bulk_replace(values, existing_adapter, new_adapter, initiator=None)
Load a new collection, firing events based on prior like membership.
Appends instances in values onto the new_adapter. Events will be fired for any instance not present in the existing_adapter. Any instances in existing_adapter not present in values will have remove events fired upon them.
Parameters:
values – An iterable of collection member instances
existing_adapter – A CollectionAdapter of instances to be replaced
new_adapter – An empty CollectionAdapter to load with values
class sqlalchemy.orm.collections.collection
Decorators for entity collection classes.
The decorators fall into two groups: annotations and interception recipes.
The annotating decorators (appender, remover, iterator, converter, internally_instrumented) indicate the method’s purpose and take no arguments; they are not written with parens. The recipe decorators all require parens, even those that take no arguments:

@collection.adds('entity')
def insert(self, position, entity): ...

@collection.removes_return()
def popitem(self): ...

Members
adds(), appender(), converter(), internally_instrumented(), iterator(), remover(), removes(), removes_return(), replaces()
method sqlalchemy.orm.collections.collection.static adds(arg)
Mark the method as adding an entity to the collection.
Adds “add to collection” handling to the method. The decorator argument indicates which method argument holds the SQLAlchemy-relevant value. Arguments can be specified positionally (i.e. integer) or by name:
@collection.adds(1)
def push(self, item): ...
@collection.adds('entity')
def do_stuff(self, thing, entity=None): ...
method sqlalchemy.orm.collections.collection.static appender(fn)
Tag the method as the collection appender.
The appender method is called with one positional argument: the value to append. The method will be automatically decorated with ‘adds(1)’ if not already decorated:
@collection.appender
def add(self, append): ...
# or, equivalently
@collection.appender
@collection.adds(1)
def add(self, append): ...
# for mapping type, an 'append' may kick out a previous value
# that occupies that slot. consider d['a'] = 'foo'- any previous
# value in d['a'] is discarded.
@collection.appender
@collection.replaces(1)
def add(self, entity):
    key = some_key_func(entity)
    previous = None
    if key in self:
        previous = self[key]
    self[key] = entity
    return previous
If the value to append is not allowed in the collection, you may raise an exception. Something to remember is that the appender will be called for each object mapped by a database query. If the database contains rows that violate your collection semantics, you will need to get creative to fix the problem, as access via the collection will not work.
If the appender method is internally instrumented, you must also receive the keyword argument ‘_sa_initiator’ and ensure its promulgation to collection events.
method sqlalchemy.orm.collections.collection.static converter(fn)
Tag the method as the collection converter.
Deprecated since version 1.3: The collection.converter() handler is deprecated and will be removed in a future release. Please refer to the bulk_replace listener interface in conjunction with the listen() function.

This optional method will be called when a collection is being replaced entirely, as in:
myobj.acollection = [newvalue1, newvalue2]
The converter method will receive the object being assigned and should return an iterable of values suitable for use by the appender method. A converter must not assign values or mutate the collection, its sole job is to adapt the value the user provides into an iterable of values for the ORM’s use.

The default converter implementation will use duck-typing to do the conversion. A dict-like collection will be converted into an iterable of dictionary values, and other types will simply be iterated:
@collection.converter
def convert(self, other): ...
If the duck-typing of the object does not match the type of this collection, a TypeError is raised.
Supply an implementation of this method if you want to expand the range of possible types that can be assigned in bulk or perform validation on the values about to be assigned.
method sqlalchemy.orm.collections.collection.static internally_instrumented(fn)
Tag the method as instrumented.
This tag will prevent any decoration from being applied to the method. Use this if you are orchestrating your own calls to collection_adapter() in one of the basic SQLAlchemy interface methods, or to prevent an automatic ABC method decoration from wrapping your implementation:

# normally an 'extend' method on a list-like class would be
# automatically intercepted and re-implemented in terms of
# SQLAlchemy events and append(). your implementation will
# never be called, unless:
@collection.internally_instrumented
def extend(self, items): ...
method sqlalchemy.orm.collections.collection.static iterator(fn)
Tag the method as the collection iterator.
The iterator method is called with no arguments. It is expected to return an iterator over all collection members:
@collection.iterator
def __iter__(self): ...
method sqlalchemy.orm.collections.collection.static remover(fn)
Tag the method as the collection remover.
The remover method is called with one positional argument: the value to remove. The method will be automatically decorated with removes_return() if not already decorated:
@collection.remover
def zap(self, entity): ...
# or, equivalently
@collection.remover
@collection.removes_return()
def zap(self, entity): ...
If the value to remove is not present in the collection, you may raise an exception or return None to ignore the error.
If the remove method is internally instrumented, you must also receive the keyword argument ‘_sa_initiator’ and ensure its promulgation to collection events.
method sqlalchemy.orm.collections.collection.static removes(arg)
Mark the method as removing an entity in the collection.
Adds “remove from collection” handling to the method. The decorator argument indicates which method argument holds the SQLAlchemy-relevant value to be removed. Arguments can be specified positionally (i.e. integer) or by name:
@collection.removes(1)
def zap(self, item): ...
For methods where the value to remove is not known at call-time, use collection.removes_return.
method sqlalchemy.orm.collections.collection.static removes_return()
Mark the method as removing an entity in the collection.
Adds “remove from collection” handling to the method. The return value of the method, if any, is considered the value to remove. The method arguments are not inspected:
@collection.removes_return()
def pop(self): ...
For methods where the value to remove is known at call-time, use collection.removes.
method sqlalchemy.orm.collections.collection.static replaces(arg)
Mark the method as replacing an entity in the collection.
Adds “add to collection” and “remove from collection” handling to the method. The decorator argument indicates which method argument holds the SQLAlchemy-relevant value to be added, and return value, if any will be considered the value to remove.
Arguments can be specified positionally (i.e. integer) or by name:

@collection.replaces(2)
def __setitem__(self, index, item): ...
sqlalchemy.orm.collections.collection_adapter = operator.attrgetter('_sa_adapter')
attrgetter(attr, ...) --> attrgetter object
Return a callable object that fetches the given attribute(s) from its operand. After f = attrgetter('name'), the call f(r) returns r.name. After g = attrgetter('name', 'date'), the call g(r) returns (r.name, r.date). After h = attrgetter('name.first', 'name.last'), the call h(r) returns (r.name.first, r.name.last).
class sqlalchemy.orm.collections.CollectionAdapter
Bridges between the ORM and arbitrary Python collections.
Proxies base-level collection operations (append, remove, iterate) to the underlying Python collection, and emits add/remove events for entities entering or leaving the collection.
The ORM uses CollectionAdapter exclusively for interaction with entity collections.
class sqlalchemy.orm.collections.InstrumentedDict
An instrumented version of the built-in dict.
Class signature
class sqlalchemy.orm.collections.InstrumentedDict (builtins.dict, typing.Generic)
class sqlalchemy.orm.collections.InstrumentedList
An instrumented version of the built-in list.
Class signature
class sqlalchemy.orm.collections.InstrumentedList (builtins.list, typing.Generic)
class sqlalchemy.orm.collections.InstrumentedSet
An instrumented version of the built-in set.
Class signature
class sqlalchemy.orm.collections.InstrumentedSet (builtins.set, typing.Generic)
function sqlalchemy.orm.collections.prepare_instrumentation(factory: Union[Type[Collection[Any]], Callable[[], _AdaptedCollectionProtocol]]) → Callable[[], _AdaptedCollectionProtocol]
Prepare a callable for future use as a collection class factory.
Given a collection class factory (either a type or no-arg callable), return another factory that will produce compatible instances when called.