PostgreSQL


    Sequences/SERIAL/IDENTITY

    PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.

To specify a specific named sequence to be used for primary key generation, use the Sequence construct.
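
A minimal sketch, with illustrative table and sequence names:

    from sqlalchemy import Column, Integer, MetaData, Sequence, Table

    metadata = MetaData()

    sometable = Table(
        "sometable",
        metadata,
        # the named sequence "some_id_seq" generates primary key values
        Column("id", Integer, Sequence("some_id_seq"), primary_key=True),
    )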

    When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the “last insert identifier” available, a RETURNING clause is added to the INSERT statement which specifies the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if PostgreSQL 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, the returned value to be used in the subsequent insert. Note that when an insert() construct is executed using “executemany” semantics, the “last inserted identifier” functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.

PostgreSQL 10 and above have a new IDENTITY feature that supersedes the use of SERIAL. The Identity construct in a Column can be used to control its behavior:

    from sqlalchemy import Table, Column, MetaData, Integer, Identity, String

    metadata = MetaData()
    data = Table(
        "data",
        metadata,
        Column(
            'id', Integer, Identity(start=42, cycle=True), primary_key=True
        ),
        Column('data', String)
    )

    The CREATE TABLE for the above Table object would be:

    CREATE TABLE data (
        id INTEGER GENERATED BY DEFAULT AS IDENTITY (START WITH 42 CYCLE),
        data VARCHAR,
        PRIMARY KEY (id)
    )

Changed in version 1.4: Added the Identity construct in a Column to specify the option of an autoincrementing column.

    Note

    Previous versions of SQLAlchemy did not have built-in support for rendering of IDENTITY, and could use the following compilation hook to replace occurrences of SERIAL with IDENTITY:

    from sqlalchemy.schema import CreateColumn
    from sqlalchemy.ext.compiler import compiles

    @compiles(CreateColumn, 'postgresql')
    def use_identity(element, compiler, **kw):
        text = compiler.visit_create_column(element, **kw)
        text = text.replace(
            "SERIAL", "INT GENERATED BY DEFAULT AS IDENTITY"
        )
        return text

    Using the above, a table such as:

    t = Table(
        't', m,
        Column('id', Integer, primary_key=True),
        Column('data', String)
    )

    Will generate on the backing database as:

    CREATE TABLE t (
        id INT GENERATED BY DEFAULT AS IDENTITY,
        data VARCHAR,
        PRIMARY KEY (id)
    )

    Server Side Cursors

Server-side cursor support is available for the psycopg2 and asyncpg dialects, and may also be available in others.

    Server side cursors are enabled on a per-statement basis by using the Connection.execution_options.stream_results connection execution option:

    with engine.connect() as conn:
        result = conn.execution_options(stream_results=True).execute(
            text("select * from table")
        )

    Note that some kinds of SQL statements may not be supported with server side cursors; generally, only SQL statements that return rows should be used with this option.
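
When streaming, rows may also be fetched in batches rather than one at a time; a minimal sketch using Result.partitions(), with an illustrative table name:

    from sqlalchemy import text

    with engine.connect() as conn:
        result = conn.execution_options(stream_results=True).execute(
            text("select * from some_large_table")  # illustrative table
        )
        # fetch rows in batches of 100 instead of buffering the full result
        for partition in result.partitions(100):
            for row in partition:
                ...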

    Deprecated since version 1.4: The dialect-level server_side_cursors flag is deprecated and will be removed in a future release. Please use the execution option for unbuffered cursor support.

    See also

    Using Server Side Cursors (a.k.a. stream results)

    Transaction Isolation Level

Most SQLAlchemy dialects support setting of transaction isolation level using the create_engine.isolation_level parameter at the create_engine() level, and at the Connection level via the Connection.execution_options.isolation_level parameter.

For PostgreSQL dialects, this feature works either by making use of the DBAPI-specific features, such as psycopg2’s isolation level flags which will embed the isolation level setting inline with the "BEGIN" statement, or for DBAPIs with no direct support by emitting SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> ahead of the "BEGIN" statement emitted by the DBAPI. For the special AUTOCOMMIT isolation level, DBAPI-specific techniques are used, typically an .autocommit flag on the DBAPI connection object.

    To set isolation level using create_engine():

    engine = create_engine(
        "postgresql+pg8000://scott:tiger@localhost/test",
        isolation_level="REPEATABLE READ"
    )

    To set using per-connection execution options:

    with engine.connect() as conn:
        conn = conn.execution_options(
            isolation_level="REPEATABLE READ"
        )
        with conn.begin():
            # ... work with transaction

There are also more options for isolation level configurations, such as “sub-engine” objects linked to a main Engine which each apply different isolation level settings. See the discussion at Setting Transaction Isolation Levels including DBAPI Autocommit for background.

    Valid values for isolation_level on most PostgreSQL dialects include:

    • READ COMMITTED

    • READ UNCOMMITTED

    • REPEATABLE READ

    • SERIALIZABLE

    • AUTOCOMMIT

    See also

    Setting READ ONLY / DEFERRABLE

    pg8000 Transaction Isolation Level

    Setting READ ONLY / DEFERRABLE

    Most PostgreSQL dialects support setting the “READ ONLY” and “DEFERRABLE” characteristics of the transaction, which is in addition to the isolation level setting. These two attributes can be established either in conjunction with or independently of the isolation level by passing the postgresql_readonly and postgresql_deferrable flags with Connection.execution_options(). The example below illustrates passing the "SERIALIZABLE" isolation level at the same time as setting “READ ONLY” and “DEFERRABLE”:

    with engine.connect() as conn:
        conn = conn.execution_options(
            isolation_level="SERIALIZABLE",
            postgresql_readonly=True,
            postgresql_deferrable=True
        )
        with conn.begin():
            # ... work with transaction

    Note that some DBAPIs such as asyncpg only support “readonly” with SERIALIZABLE isolation.

    New in version 1.4: added support for the postgresql_readonly and postgresql_deferrable execution options.

    Temporary Table / Resource Reset for Connection Pooling

The QueuePool connection pool implementation used by the SQLAlchemy Engine object includes reset on return behavior that will invoke the DBAPI .rollback() method when connections are returned to the pool. While this rollback will clear out the immediate state used by the previous transaction, it does not cover a wider range of session-level state, including temporary tables as well as other server state such as prepared statement handles and statement caches. The PostgreSQL database includes a variety of commands which may be used to reset this state, including DISCARD, RESET, DEALLOCATE, and UNLISTEN.

To install one or more of these commands as the means of performing reset-on-return, the PoolEvents.reset() event hook may be used, as demonstrated in the example below. The implementation will end transactions in progress as well as discard temporary tables using the CLOSE, RESET and DISCARD commands; see the PostgreSQL documentation for background on what each of these statements does.

    The create_engine.pool_reset_on_return parameter is set to None so that the custom scheme can replace the default behavior completely. The custom hook implementation calls .rollback() in any case, as it’s usually important that the DBAPI’s own tracking of commit/rollback will remain consistent with the state of the transaction:

    from sqlalchemy import create_engine
    from sqlalchemy import event

    postgresql_engine = create_engine(
        "postgresql+psycopg2://scott:tiger@hostname/dbname",
        # disable default reset-on-return scheme
        pool_reset_on_return=None,
    )

    @event.listens_for(postgresql_engine, "reset")
    def _reset_postgresql(dbapi_connection, connection_record, reset_state):
        if not reset_state.terminate_only:
            cursor = dbapi_connection.cursor()
            cursor.execute("CLOSE ALL")
            cursor.execute("RESET ALL")
            cursor.execute("DISCARD TEMP")
            cursor.close()

        # so that the DBAPI itself knows that the connection has been
        # reset
        dbapi_connection.rollback()

    Changed in version 2.0.0b3: Added additional state arguments to the event and additionally ensured the event is invoked for all “reset” occurrences, so that it’s appropriate as a place for custom “reset” handlers. Previous schemes which use the PoolEvents.checkin() handler remain usable as well.

    See also

Reset On Return - in the Connection Pooling documentation

    Setting Alternate Search Paths on Connect

The PostgreSQL search_path variable refers to the list of schema names that will be implicitly referred towards when a particular table or other object is referenced in a SQL statement. As detailed in the next section Remote-Schema Table Introspection and PostgreSQL search_path, SQLAlchemy is generally organized around the concept of keeping this variable at its default value of public. However, in order to have it set to any arbitrary name or names when connections are used automatically, the “SET SESSION search_path” command may be invoked for all connections in a pool using the following event handler, as discussed at Custom DBAPI connect() arguments / on-connect routines:

    from sqlalchemy import event
    from sqlalchemy import create_engine

    engine = create_engine("postgresql+psycopg2://scott:tiger@host/dbname")

    # the schema name to apply to each new connection (illustrative value)
    schema_name = "my_schema"

    @event.listens_for(engine, "connect", insert=True)
    def set_search_path(dbapi_connection, connection_record):
        existing_autocommit = dbapi_connection.autocommit
        dbapi_connection.autocommit = True
        cursor = dbapi_connection.cursor()
        cursor.execute("SET SESSION search_path='%s'" % schema_name)
        cursor.close()
        dbapi_connection.autocommit = existing_autocommit

    The reason the recipe is complicated by use of the .autocommit DBAPI attribute is so that when the SET SESSION search_path directive is invoked, it is invoked outside of the scope of any transaction and therefore will not be reverted when the DBAPI connection has a rollback.

    See also

Setting a Default Schema for New Connections

    Remote-Schema Table Introspection and PostgreSQL search_path

    Section Best Practices Summarized

Keep the search_path variable set to its default of public, without any other schema names. For other schema names, name these explicitly within Table definitions. Alternatively, the postgresql_ignore_search_path option will cause all reflected Table objects to have a Table.schema attribute set up.

    The PostgreSQL dialect can reflect tables from any schema, as outlined in Reflecting Tables from Other Schemas.

    With regards to tables which these objects refer to via foreign key constraint, a decision must be made as to how the .schema is represented in those remote tables, in the case where that remote schema name is also a member of the current PostgreSQL search path.

    By default, the PostgreSQL dialect mimics the behavior encouraged by PostgreSQL’s own pg_get_constraintdef() builtin procedure. This function returns a sample definition for a particular foreign key constraint, omitting the referenced schema name from that definition when the name is also in the PostgreSQL schema search path. The interaction below illustrates this behavior:

    test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
    CREATE TABLE
    test=> CREATE TABLE referring(
    test(>         id INTEGER PRIMARY KEY,
    test(>         referred_id INTEGER REFERENCES test_schema.referred(id));
    CREATE TABLE
    test=> SET search_path TO public, test_schema;
    test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
    test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
    test-> ON n.oid = c.relnamespace
    test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
    test-> WHERE c.relname='referring' AND r.contype = 'f'
    test-> ;
                   pg_get_constraintdef
    ---------------------------------------------------
     FOREIGN KEY (referred_id) REFERENCES referred(id)
    (1 row)

    Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.

    On the other hand, if we set the search path back to the typical default of public:

    test=> SET search_path TO public;
    SET

    The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:

    test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
    test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
    test-> ON n.oid = c.relnamespace
    test-> JOIN pg_catalog.pg_constraint r ON c.oid = r.conrelid
    test-> WHERE c.relname='referring' AND r.contype = 'f';
                         pg_get_constraintdef
    ---------------------------------------------------------------
     FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
    (1 row)

    SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked a table reflection process as follows:

    >>> from sqlalchemy import Table, MetaData, create_engine, text
    >>> engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    >>> with engine.connect() as conn:
    ...     conn.execute(text("SET search_path TO test_schema, public"))
    ...     metadata_obj = MetaData()
    ...     referring = Table('referring', metadata_obj,
    ...                       autoload_with=conn)
    ...
    <sqlalchemy.engine.result.CursorResult object at 0x101612ed0>

The above process would deliver to the MetaData.tables collection the referred table named without the schema:

    >>> metadata_obj.tables['referred'].schema is None
    True

To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():

    >>> with engine.connect() as conn:
    ...     conn.execute(text("SET search_path TO test_schema, public"))
    ...     metadata_obj = MetaData()
    ...     referring = Table('referring', metadata_obj,
    ...                       autoload_with=conn,
    ...                       postgresql_ignore_search_path=True)
    ...
    <sqlalchemy.engine.result.CursorResult object at 0x1016126d0>

    We will now have test_schema.referred stored as schema-qualified:

    >>> metadata_obj.tables['test_schema.referred'].schema
    'test_schema'

    Best Practices for PostgreSQL Schema reflection

    The description of PostgreSQL schema reflection behavior is complex, and is the product of many years of dealing with widely varied use cases and user preferences. But in fact, there’s no need to understand any of it if you just stick to the simplest use pattern: leave the search_path set to its default of public only, never refer to the name public as an explicit schema name otherwise, and refer to all other schema names explicitly when building up a Table object. The options described here are only for those users who can’t, or prefer not to, stay within these guidelines.

    Note that in all cases, the “default” schema is always reflected as None. The “default” schema on PostgreSQL is that which is returned by the PostgreSQL current_schema() function. On a typical PostgreSQL installation, this is the name public. So a table that refers to another which is in the public (i.e. default) schema will always have the .schema attribute set to None.

    See also

Reflecting Tables from Other Schemas - discussion of the issue from a backend-agnostic perspective

    The Schema Search Path - on the PostgreSQL website.

    INSERT/UPDATE…RETURNING

The dialect supports PG 8.2’s INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the UpdateBase.returning() method on a per-statement basis:

    # INSERT..RETURNING
    result = table.insert().returning(table.c.col1, table.c.col2).\
        values(name='foo')
    print(result.fetchall())

    # UPDATE..RETURNING
    result = table.update().returning(table.c.col1, table.c.col2).\
        where(table.c.name=='foo').values(name='bar')
    print(result.fetchall())

    # DELETE..RETURNING
    result = table.delete().returning(table.c.col1, table.c.col2).\
        where(table.c.name=='foo')
    print(result.fetchall())

    INSERT…ON CONFLICT (Upsert)

    Starting with version 9.5, PostgreSQL allows “upserts” (update or insert) of rows into a table via the ON CONFLICT clause of the INSERT statement. A candidate row will only be inserted if that row does not violate any unique constraints. In the case of a unique constraint violation, a secondary action can occur which can be either “DO UPDATE”, indicating that the data in the target row should be updated, or “DO NOTHING”, which indicates to silently skip this row.

    Conflicts are determined using existing unique constraints and indexes. These constraints may be identified either using their name as stated in DDL, or they may be inferred by stating the columns and conditions that comprise the indexes.

SQLAlchemy provides ON CONFLICT support via the PostgreSQL-specific insert() function, which provides the generative methods Insert.on_conflict_do_update() and Insert.on_conflict_do_nothing():

    >>> from sqlalchemy.dialects.postgresql import insert
    >>> insert_stmt = insert(my_table).values(
    ...     id='some_existing_id',
    ...     data='inserted value')
    >>> do_nothing_stmt = insert_stmt.on_conflict_do_nothing(
    ...     index_elements=['id']
    ... )
    >>> print(do_nothing_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT (id) DO NOTHING
    >>> do_update_stmt = insert_stmt.on_conflict_do_update(
    ...     constraint='pk_my_table',
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT ON CONSTRAINT pk_my_table DO UPDATE SET data = %(param_1)s

    New in version 1.1.

    See also

    INSERT .. ON CONFLICT - in the PostgreSQL documentation.

    Specifying the Target

    Both methods supply the “target” of the conflict using either the named constraint or by column inference:

• The Insert.on_conflict_do_update.index_elements argument specifies a sequence containing string column names, Column objects, and/or SQL expression elements, which would identify a unique index:

    >>> do_update_stmt = insert_stmt.on_conflict_do_update(
    ...     index_elements=['id'],
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
    >>> do_update_stmt = insert_stmt.on_conflict_do_update(
    ...     index_elements=[my_table.c.id],
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT (id) DO UPDATE SET data = %(param_1)s
• When using Insert.on_conflict_do_update.index_elements to infer an index, a partial index can be inferred by also specifying the Insert.on_conflict_do_update.index_where parameter:

    >>> stmt = insert(my_table).values(user_email='a@b.com', data='inserted data')
    >>> stmt = stmt.on_conflict_do_update(
    ...     index_elements=[my_table.c.user_email],
    ...     index_where=my_table.c.user_email.like('%@gmail.com'),
    ...     set_=dict(data=stmt.excluded.data)
    ... )
    >>> print(stmt)
    INSERT INTO my_table (data, user_email)
    VALUES (%(data)s, %(user_email)s) ON CONFLICT (user_email)
    WHERE user_email LIKE %(user_email_1)s DO UPDATE SET data = excluded.data
    • The Insert.on_conflict_do_update.constraint argument is used to specify an index directly rather than inferring it. This can be the name of a UNIQUE constraint, a PRIMARY KEY constraint, or an INDEX:

    >>> do_update_stmt = insert_stmt.on_conflict_do_update(
    ...     constraint='my_table_idx_1',
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT ON CONSTRAINT my_table_idx_1 DO UPDATE SET data = %(param_1)s
    >>> do_update_stmt = insert_stmt.on_conflict_do_update(
    ...     constraint='my_table_pk',
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT ON CONSTRAINT my_table_pk DO UPDATE SET data = %(param_1)s
• The Insert.on_conflict_do_update.constraint argument may also refer to a SQLAlchemy construct representing a constraint, e.g. UniqueConstraint, PrimaryKeyConstraint, Index, or ExcludeConstraint. In this use, if the constraint has a name, it is used directly. Otherwise, if the constraint is unnamed, then inference will be used, where the expressions and optional WHERE clause of the constraint will be spelled out in the construct. This use is especially convenient to refer to the named or unnamed primary key of a Table using the Table.primary_key attribute:

    >>> do_update_stmt = insert_stmt.on_conflict_do_update(
    ...     constraint=my_table.primary_key,
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT (id) DO UPDATE SET data = %(param_1)s

    The SET Clause

ON CONFLICT...DO UPDATE is used to perform an update of the already existing row, using any combination of new values as well as values from the proposed insertion. These values are specified using the Insert.on_conflict_do_update.set_ parameter. This parameter accepts a dictionary which consists of direct values for UPDATE:

    >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
    >>> do_update_stmt = stmt.on_conflict_do_update(
    ...     index_elements=['id'],
    ...     set_=dict(data='updated value')
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT (id) DO UPDATE SET data = %(param_1)s

    Warning

The Insert.on_conflict_do_update() method does not take into account Python-side default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.
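
For example, a column configured with Column.onupdate would need its value restated by hand; a sketch assuming a hypothetical updated_at column on my_table:

    from sqlalchemy import func
    from sqlalchemy.dialects.postgresql import insert

    stmt = insert(my_table).values(id='some_id', data='inserted value')
    do_update_stmt = stmt.on_conflict_do_update(
        index_elements=['id'],
        # updated_at is hypothetical; its onupdate default is NOT applied
        # automatically, so the value is restated explicitly here
        set_=dict(data='updated value', updated_at=func.now())
    )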

    Updating using the Excluded INSERT Values

In order to refer to the proposed insertion row, the special alias Insert.excluded is available as an attribute on the Insert object; this object is a ColumnCollection which contains all columns of the target table:

    >>> stmt = insert(my_table).values(
    ...     id='some_id',
    ...     data='inserted value',
    ...     author='jlh'
    ... )
    >>> do_update_stmt = stmt.on_conflict_do_update(
    ...     index_elements=['id'],
    ...     set_=dict(data='updated value', author=stmt.excluded.author)
    ... )
    >>> print(do_update_stmt)
    INSERT INTO my_table (id, data, author)
    VALUES (%(id)s, %(data)s, %(author)s)
    ON CONFLICT (id) DO UPDATE SET data = %(param_1)s, author = excluded.author

    Additional WHERE Criteria

The Insert.on_conflict_do_update() method also accepts a WHERE clause using the Insert.on_conflict_do_update.where parameter, which will limit those rows which receive an UPDATE:

    >>> stmt = insert(my_table).values(
    ...     id='some_id',
    ...     data='inserted value',
    ...     author='jlh'
    ... )
    >>> on_update_stmt = stmt.on_conflict_do_update(
    ...     index_elements=['id'],
    ...     set_=dict(data='updated value', author=stmt.excluded.author),
    ...     where=(my_table.c.status == 2)
    ... )
    >>> print(on_update_stmt)
    INSERT INTO my_table (id, data, author)
    VALUES (%(id)s, %(data)s, %(author)s)
    ON CONFLICT (id) DO UPDATE SET data = %(param_1)s, author = excluded.author
    WHERE my_table.status = %(status_1)s

    Skipping Rows with DO NOTHING

ON CONFLICT may be used to skip inserting a row entirely if any conflict with a unique or exclusion constraint occurs; below this is illustrated using the Insert.on_conflict_do_nothing() method:

    >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
    >>> stmt = stmt.on_conflict_do_nothing(index_elements=['id'])
    >>> print(stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT (id) DO NOTHING

    If DO NOTHING is used without specifying any columns or constraint, it has the effect of skipping the INSERT for any unique or exclusion constraint violation which occurs:

    >>> stmt = insert(my_table).values(id='some_id', data='inserted value')
    >>> stmt = stmt.on_conflict_do_nothing()
    >>> print(stmt)
    INSERT INTO my_table (id, data) VALUES (%(id)s, %(data)s)
    ON CONFLICT DO NOTHING

Full Text Search

PostgreSQL’s full text search system is available through the use of the func namespace, combined with the use of custom operators via the Operators.bool_op() method. For simple cases with some degree of cross-backend compatibility, the Operators.match() operator may also be used.

    Simple plain text matching with match()

    The Operators.match() operator provides for cross-compatible simple text matching. For the PostgreSQL backend, it’s hardcoded to generate an expression using the @@ operator in conjunction with the plainto_tsquery() PostgreSQL function.

    On the PostgreSQL dialect, an expression like the following:

    select(sometable.c.text.match("search string"))

    would emit to the database:

    SELECT text @@ plainto_tsquery('search string') FROM table

    Above, passing a plain string to Operators.match() will automatically make use of plainto_tsquery() to specify the type of tsquery. This establishes basic database cross-compatibility for Operators.match() with other backends.

    Changed in version 2.0: The default tsquery generation function used by the PostgreSQL dialect with Operators.match() is plainto_tsquery().

    To render exactly what was rendered in 1.4, use the following form:

    from sqlalchemy import func

    select(
        sometable.c.text.bool_op("@@")(func.to_tsquery("search string"))
    )

    Which would emit:

    SELECT text @@ to_tsquery('search string') FROM table

    Using PostgreSQL full text functions and operators directly

Text search operations beyond the simple use of Operators.match() may make use of the func namespace to generate PostgreSQL full-text functions, in combination with Operators.bool_op() to generate any boolean operator.

    For example, the query:

    select(
        func.to_tsquery('cat').bool_op("@>")(func.to_tsquery('cat & rat'))
    )

    would generate:

    SELECT to_tsquery('cat') @> to_tsquery('cat & rat')

The TSVECTOR type can provide for explicit CAST:

    from sqlalchemy.dialects.postgresql import TSVECTOR
    from sqlalchemy import select, cast

    select(cast("some text", TSVECTOR))

    produces a statement equivalent to:

    SELECT CAST('some text' AS TSVECTOR) AS anon_1

The func namespace is augmented by the PostgreSQL dialect to set up correct argument and return types for most full text search functions. These functions are used automatically by the sqlalchemy.sql.expression.func namespace assuming the sqlalchemy.dialects.postgresql package has been imported, or create_engine() has been invoked using a postgresql dialect.
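
As a minimal sketch of this behavior, the return type of such a function can be inspected once the dialect package is imported:

    from sqlalchemy.dialects import postgresql  # sets up the typed functions
    from sqlalchemy import func

    # the dialect assigns PostgreSQL-specific return types to these functions
    print(func.to_tsquery("cat").type)   # TSQUERY
    print(func.to_tsvector("cat").type)  # TSVECTOR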

    Specifying the “regconfig” with match() or custom operators

    PostgreSQL’s plainto_tsquery() function accepts an optional “regconfig” argument that is used to instruct PostgreSQL to use a particular pre-computed GIN or GiST index in order to perform the search. When using Operators.match(), this additional parameter may be specified using the postgresql_regconfig parameter, such as:

    select(mytable.c.id).where(
        mytable.c.title.match('somestring', postgresql_regconfig='english')
    )

    Which would emit:

    SELECT mytable.id FROM mytable
    WHERE mytable.title @@ plainto_tsquery('english', 'somestring')

When using other PostgreSQL search functions with func, the “regconfig” parameter may be passed directly as the initial argument:

    select(mytable.c.id).where(
        func.to_tsvector("english", mytable.c.title).bool_op("@@")(
            func.to_tsquery("english", "somestring")
        )
    )

    produces a statement equivalent to:

    SELECT mytable.id FROM mytable
    WHERE to_tsvector('english', mytable.title) @@
        to_tsquery('english', 'somestring')

    It is recommended that you use the EXPLAIN ANALYZE... tool from PostgreSQL to ensure that you are generating queries with SQLAlchemy that take full advantage of any indexes you may have created for full text search.

    See also

    Full Text Search - in the PostgreSQL documentation

    FROM ONLY …

    The dialect supports PostgreSQL’s ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ... syntaxes. It uses SQLAlchemy’s hints mechanism:

    # SELECT ... FROM ONLY ...
    result = conn.execute(table.select().with_hint(table, 'ONLY', 'postgresql'))
    print(result.fetchall())

    # UPDATE ONLY ...
    table.update().values(foo='bar').with_hint('ONLY', dialect_name='postgresql')

    # DELETE FROM ONLY ...
    table.delete().with_hint('ONLY', dialect_name='postgresql')

    PostgreSQL-Specific Index Options

Several extensions to the Index construct are available, specific to the PostgreSQL dialect.

    Covering Indexes

    The postgresql_include option renders INCLUDE(colname) for the given string names:

    1. Index("my_index", table.c.x, postgresql_include=['y'])

    would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y)

    Note that this feature requires PostgreSQL 11 or later.

    New in version 1.4.

    Partial Indexes

    Partial indexes add criterion to the index definition so that the index is applied to a subset of rows. These can be specified on Index using the postgresql_where keyword argument:

    Index('my_index', my_table.c.id, postgresql_where=my_table.c.value > 10)

    Operator Classes

PostgreSQL allows the specification of an operator class for each column of an index (see https://www.postgresql.org/docs/current/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:

    Index(
        'my_index', my_table.c.id, my_table.c.data,
        postgresql_ops={
            'data': 'text_pattern_ops',
            'id': 'int4_ops'
        })

Note that the keys in the postgresql_ops dictionaries are the “key” name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.
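
A minimal sketch of this distinction, with illustrative names, where the Column key "data" differs from the database column name "data_col":

    from sqlalchemy import Column, Index, Integer, MetaData, String, Table

    metadata = MetaData()
    t = Table(
        "t",
        metadata,
        Column("id", Integer, primary_key=True),
        # accessed as t.c.data, rendered in DDL as "data_col"
        Column("data_col", String, key="data"),
    )

    # the postgresql_ops key refers to the Column key, not the database name
    Index("my_index", t.c.data, postgresql_ops={"data": "text_pattern_ops"})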

    If postgresql_ops is to be used against a complex SQL expression such as a function call, then to apply to the column it must be given a label that is identified in the dictionary by name, e.g.:

    Index(
        'my_index', my_table.c.id,
        func.lower(my_table.c.data).label('data_lower'),
        postgresql_ops={
            'data_lower': 'text_pattern_ops',
            'id': 'int4_ops'
        })

Operator classes are also supported by the ExcludeConstraint construct using the ExcludeConstraint.ops parameter. See that parameter for details.

    New in version 1.3.21: added support for operator classes with ExcludeConstraint.

    Index Types

PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see https://www.postgresql.org/docs/current/static/indexes-types.html). These can be specified on Index using the postgresql_using keyword argument:

    Index('my_index', my_table.c.data, postgresql_using='gin')

    The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.

    Index Storage Parameters

PostgreSQL allows storage parameters to be set on indexes. The storage parameters available depend on the index method used by the index. Storage parameters can be specified on Index using the postgresql_with keyword argument.
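
A minimal sketch, setting a B-tree fillfactor with an illustrative value:

    Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})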

      New in version 1.0.6.

PostgreSQL also allows the tablespace in which to create the index to be defined. The tablespace can be specified on Index using the postgresql_tablespace keyword argument:

    Index('my_index', my_table.c.data, postgresql_tablespace='my_tablespace')

      New in version 1.1.

Note that the same option is available on Table as well.

      Indexes with CONCURRENTLY

The PostgreSQL index option CONCURRENTLY is supported by passing the flag postgresql_concurrently to the Index construct:

    tbl = Table('testtbl', m, Column('data', Integer))

    idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

      The above index construct will render DDL for CREATE INDEX, assuming PostgreSQL 8.2 or higher is detected or for a connection-less dialect, as:

    CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)

      For DROP INDEX, assuming PostgreSQL 9.2 or higher is detected or for a connection-less dialect, it will emit:

    DROP INDEX CONCURRENTLY test_idx1

      New in version 1.1: support for CONCURRENTLY on DROP INDEX. The CONCURRENTLY keyword is now only emitted if a high enough version of PostgreSQL is detected on the connection (or for a connection-less dialect).

      When using CONCURRENTLY, the PostgreSQL database requires that the statement be invoked outside of a transaction block. The Python DBAPI enforces that even for a single statement, a transaction is present, so to use this construct, the DBAPI’s “autocommit” mode must be used:

    metadata = MetaData()
    table = Table(
        "foo", metadata,
        Column("id", String))
    index = Index(
        "foo_idx", table.c.id, postgresql_concurrently=True)

    with engine.connect() as conn:
        with conn.execution_options(isolation_level='AUTOCOMMIT'):
            table.create(conn)

      See also

      Transaction Isolation Level

      PostgreSQL Index Reflection

The PostgreSQL database creates a UNIQUE INDEX implicitly whenever the UNIQUE CONSTRAINT construct is used. When inspecting a table using Inspector, the Inspector.get_indexes() and the Inspector.get_unique_constraints() methods will report on these two constructs distinctly; in the case of the index, the key duplicates_constraint will be present in the index entry if it is detected as mirroring a constraint. When performing reflection using Table(..., autoload_with=engine), the UNIQUE INDEX is not returned in Table.indexes when it is detected as mirroring a UniqueConstraint in the Table.constraints collection.

Changed in version 1.0.0: Table reflection now includes UniqueConstraint objects present in the Table.constraints collection; the PostgreSQL backend will no longer include a “mirrored” Index construct in Table.indexes if it is detected as corresponding to a unique constraint.
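
As a sketch, the duplicates_constraint key may be checked when inspecting indexes, assuming an illustrative table name:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    insp = inspect(engine)
    for idx in insp.get_indexes("some_table"):  # illustrative table name
        # present when the index is detected as mirroring a UNIQUE constraint
        if "duplicates_constraint" in idx:
            print(idx["name"], "mirrors", idx["duplicates_constraint"])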

      Special Reflection Options

The Inspector used for the PostgreSQL backend is an instance of PGInspector, which offers additional methods:

PGInspector

class sqlalchemy.dialects.postgresql.base.PGInspector

Members

get_domains(), get_enums(), get_foreign_table_names(), get_table_oid(), has_type()

Class signature

class sqlalchemy.dialects.postgresql.base.PGInspector (sqlalchemy.engine.reflection.Inspector)

      • method sqlalchemy.dialects.postgresql.base.PGInspector.get_domains(schema: Optional[str] = None) → List[ReflectedDomain]

        Return a list of DOMAIN objects.

        Each member is a dictionary containing these fields:

        • Parameters:

          schema – schema name. If None, the default schema (typically ‘public’) is used. May also be set to '*' to indicate load domains for all schemas.

        New in version 2.0.

      • method get_enums(schema: Optional[str] = None) → List[ReflectedEnum]

        Return a list of ENUM objects.

        Each member is a dictionary containing these fields:

        • name - name of the enum

        • schema - the schema name for the enum.

        • visible - boolean, whether or not this enum is visible in the default search path.

        • labels - a list of string labels that apply to the enum.

        • Parameters:

          schema – schema name. If None, the default schema (typically ‘public’) is used. May also be set to '*' to indicate load enums for all schemas.

        New in version 1.0.0.

      • method sqlalchemy.dialects.postgresql.base.PGInspector.get_foreign_table_names(schema: Optional[str] = None) → List[str]

        Return a list of FOREIGN TABLE names.

Behavior is similar to that of Inspector.get_table_names(), except that the list is limited to those tables that report a relkind value of f.

        New in version 1.0.0.

      • method sqlalchemy.dialects.postgresql.base.PGInspector.get_table_oid(table_name: str, schema: Optional[str] = None) → int

        Return the OID for the given table name.

        • Parameters:

• table_name – string name of the table. For special quoting, use quoted_name.

          • schema – string schema name; if omitted, uses the default schema of the database connection. For special quoting, use quoted_name.
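
A minimal usage sketch, assuming an illustrative table name:

    from sqlalchemy import create_engine, inspect

    engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
    insp = inspect(engine)  # a PGInspector when using a PostgreSQL dialect
    print(insp.get_table_oid("some_table"))  # illustrative table name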

      • method has_type(type_name: str, schema: Optional[str] = None, **kw: Any) → bool

Return True if the database has the specified type in the provided schema.

        • Parameters:

          • type_name – the type to check.

          • schema – schema name. If None, the default schema (typically ‘public’) is used. May also be set to '*' to check in all schemas.

New in version 2.0.

      PostgreSQL Table Options

Several options for CREATE TABLE are supported directly by the PostgreSQL dialect in conjunction with the Table construct:

      • TABLESPACE:

        1. Table("some_table", metadata, ..., postgresql_tablespace='some_tablespace')

        The above option is also available on the Index construct.

      • ON COMMIT:

        1. Table("some_table", metadata, ..., postgresql_on_commit='PRESERVE ROWS')
      • WITH OIDS:

        1. Table("some_table", metadata, ..., postgresql_with_oids=True)
      • WITHOUT OIDS:

        1. Table("some_table", metadata, ..., postgresql_with_oids=False)
      • INHERITS:

        1. Table("some_table", metadata, ..., postgresql_inherits="some_supertable")
        2. Table("some_table", metadata, ..., postgresql_inherits=("t1", "t2", ...))
        3. .. versionadded:: 1.0.0
      • PARTITION BY:

        1. Table("some_table", metadata, ...,
        2. postgresql_partition_by='LIST (part_column)')
        3. .. versionadded:: 1.2.6

      See also

PostgreSQL CREATE TABLE options - in the PostgreSQL documentation.

      PostgreSQL Constraint Options

      The following option(s) are supported by the PostgreSQL dialect in conjunction with selected constraint constructs:

      • NOT VALID: This option applies towards CHECK and FOREIGN KEY constraints when the constraint is being added to an existing table via ALTER TABLE, and has the effect that existing rows are not scanned during the ALTER operation against the constraint being added.

When using a SQL migration tool such as Alembic that renders ALTER TABLE constructs, the postgresql_not_valid argument may be specified as an additional keyword argument within the operation that creates the constraint, as in the following Alembic example:

    def upgrade():
        op.create_foreign_key(
            "fk_user_address",
            "address",
            "user",
            ["user_id"],
            ["id"],
            postgresql_not_valid=True
        )

The keyword is ultimately accepted directly by the CheckConstraint, ForeignKeyConstraint and ForeignKey constructs; when using a tool like Alembic, dialect-specific keyword arguments are passed through to these constructs from the migration operation directives:

        1. CheckConstraint("some_field IS NOT NULL", postgresql_not_valid=True)
        2. ForeignKeyConstraint(["some_id"], ["some_table.some_id"], postgresql_not_valid=True)

        New in version 1.4.32.

        See also

PostgreSQL ALTER TABLE options - in the PostgreSQL documentation.

      Table values, Table and Column valued functions, Row and Tuple objects

      PostgreSQL makes great use of modern SQL forms such as table-valued functions, tables and rows as values. These constructs are commonly used as part of PostgreSQL’s support for complex datatypes such as JSON, ARRAY, and other datatypes. SQLAlchemy’s SQL expression language has native support for most table-valued and row-valued forms.

      Table-Valued Functions

Many PostgreSQL built-in functions are intended to be used in the FROM clause of a SELECT statement, and are capable of returning table rows or sets of table rows. A large portion of PostgreSQL’s JSON functions, for example json_array_elements(), json_object_keys(), json_each_text(), json_each(), json_to_record(), and json_populate_recordset(), use such forms. These classes of SQL function calling forms in SQLAlchemy are available using the FunctionElement.table_valued() method in conjunction with Function objects generated from the func namespace.

      Examples from PostgreSQL’s reference documentation follow below:

      • json_each():

    >>> from sqlalchemy import select, func
    >>> stmt = select(func.json_each('{"a":"foo", "b":"bar"}').table_valued("key", "value"))
    >>> print(stmt)
    SELECT anon_1.key, anon_1.value
    FROM json_each(:json_each_1) AS anon_1
      • json_populate_record():

    >>> from sqlalchemy import select, func, literal_column
    >>> stmt = select(
    ...     func.json_populate_record(
    ...         literal_column("null::myrowtype"),
    ...         '{"a":1,"b":2}'
    ...     ).table_valued("a", "b", name="x")
    ... )
    >>> print(stmt)
    SELECT x.a, x.b
    FROM json_populate_record(null::myrowtype, :json_populate_record_1) AS x
• json_to_record() - this form uses a PostgreSQL specific form of derived columns in the alias, where we may make use of column() elements with types to produce them. The FunctionElement.table_valued() method produces a TableValuedAlias construct, and the TableValuedAlias.render_derived() method sets up the derived columns specification:

    >>> from sqlalchemy import select, func, column, Integer, Text
    >>> stmt = select(
    ...     func.json_to_record('{"a":1,"b":[1,2,3],"c":"bar"}').table_valued(
    ...         column("a", Integer), column("b", Text), column("d", Text),
    ...     ).render_derived(name="x", with_types=True)
    ... )
    >>> print(stmt)
    SELECT x.a, x.b, x.d
    FROM json_to_record(:json_to_record_1) AS x(a INTEGER, b TEXT, d TEXT)
• WITH ORDINALITY - part of the SQL standard, WITH ORDINALITY adds an ordinal counter to the output of a function and is accepted by a limited set of PostgreSQL functions including unnest() and generate_series(). The FunctionElement.table_valued() method accepts a keyword parameter with_ordinality for this purpose, which accepts the string name that will be applied to the “ordinality” column:

    >>> from sqlalchemy import select, func
    >>> stmt = select(
    ...     func.generate_series(4, 1, -1).
    ...     table_valued("value", with_ordinality="ordinality").
    ...     render_derived()
    ... )
    >>> print(stmt)
    SELECT anon_1.value, anon_1.ordinality
    FROM generate_series(:generate_series_1, :generate_series_2, :generate_series_3)
    WITH ORDINALITY AS anon_1(value, ordinality)

      New in version 1.4.0b2.

      See also

Table-Valued Functions - in the SQLAlchemy Unified Tutorial

Column Valued Functions

Similar to the table valued function, a column valued function is present in the FROM clause, but delivers itself to the columns clause as a single scalar value. PostgreSQL functions such as json_array_elements(), unnest() and generate_series() may use this form. Column valued functions are available using the FunctionElement.column_valued() method of FunctionElement:

      • json_array_elements():

    >>> from sqlalchemy import select, func
    >>> stmt = select(func.json_array_elements('["one", "two"]').column_valued("x"))
    >>> print(stmt)
    SELECT x
    FROM json_array_elements(:json_array_elements_1) AS x
• unnest() - in order to generate a PostgreSQL ARRAY literal, the array construct may be used:

    >>> from sqlalchemy.dialects.postgresql import array
    >>> from sqlalchemy import select, func
    >>> stmt = select(func.unnest(array([1, 2])).column_valued())
    >>> print(stmt)
    SELECT anon_1
    FROM unnest(ARRAY[%(param_1)s, %(param_2)s]) AS anon_1

        The function can of course be used against an existing table-bound column that’s of type ARRAY:

    >>> from sqlalchemy import table, column, ARRAY, Integer
    >>> from sqlalchemy import select, func
    >>> t = table("t", column('value', ARRAY(Integer)))
    >>> stmt = select(func.unnest(t.c.value).column_valued("unnested_value"))
    >>> print(stmt)
    SELECT unnested_value
    FROM unnest(t.value) AS unnested_value

      See also

Column Valued Functions - in the SQLAlchemy Unified Tutorial

      Row Types

      Built-in support for rendering a ROW may be approximated using func.ROW with the sqlalchemy.func namespace, or by using the tuple_() construct:

    >>> from sqlalchemy import table, column, func, tuple_
    >>> t = table("t", column("id"), column("fk"))
    >>> stmt = t.select().where(
    ...     tuple_(t.c.id, t.c.fk) > (1, 2)
    ... ).where(
    ...     func.ROW(t.c.id, t.c.fk) < func.ROW(3, 7)
    ... )
    >>> print(stmt)
    SELECT t.id, t.fk
    FROM t
    WHERE (t.id, t.fk) > (:param_1, :param_2) AND ROW(t.id, t.fk) < ROW(:ROW_1, :ROW_2)

      See also

      PostgreSQL Row Constructor Comparison

      Table Types passed to Functions

PostgreSQL supports passing a table as an argument to a function, which it refers towards as a “record” type. SQLAlchemy FromClause objects such as Table support this special form using the FromClause.table_valued() method, which is comparable to the FunctionElement.table_valued() method except that the collection of columns is already established by that of the FromClause itself:

    >>> from sqlalchemy import table, column, func, select
    >>> a = table("a", column("id"), column("x"), column("y"))
    >>> stmt = select(func.row_to_json(a.table_valued()))
    >>> print(stmt)
    SELECT row_to_json(a) AS row_to_json_1
    FROM a

      New in version 1.4.0b2.

      ARRAY Types

      The PostgreSQL dialect supports arrays, both as multidimensional column types as well as array literals:

• ARRAY - ARRAY datatype

• array - array literal

• array_agg - ARRAY_AGG SQL function

• aggregate_order_by - helper for PG’s ORDER BY aggregate function syntax.

      JSON Types

The PostgreSQL dialect supports both JSON and JSONB datatypes, including psycopg2’s native support and support for all of PostgreSQL’s special operators:

• JSON - JSON datatype

• JSONB - JSONB datatype

HSTORE Type

The PostgreSQL HSTORE type as well as hstore literals are supported:

• HSTORE - HSTORE datatype

• hstore - hstore literal

      ENUM Types

      PostgreSQL has an independently creatable TYPE structure which is used to implement an enumerated type. This approach introduces significant complexity on the SQLAlchemy side in terms of when this type should be CREATED and DROPPED. The type object is also an independently reflectable entity. The following sections should be consulted:
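
As a brief sketch of the pattern, an ENUM associated with a MetaData may be created independently of any one table; names below are illustrative:

    from sqlalchemy import Column, Integer, MetaData, Table
    from sqlalchemy.dialects.postgresql import ENUM

    metadata = MetaData()

    # associating the ENUM with the MetaData allows CREATE TYPE / DROP TYPE
    # to be managed independently of a single table
    mood = ENUM("sad", "ok", "happy", name="mood_enum", metadata=metadata)

    person = Table(
        "person",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("mood", mood),
    )

    # metadata.create_all(engine) emits CREATE TYPE mood_enum AS ENUM (...)
    # before CREATE TABLE person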

      Using ENUM with ARRAY

      The combination of ENUM and ARRAY is not directly supported by backend DBAPIs at this time. Prior to SQLAlchemy 1.3.17, a special workaround was needed in order to allow this combination to work, described below.

      Changed in version 1.3.17: The combination of ENUM and ARRAY is now directly handled by SQLAlchemy’s implementation without any workarounds needed.

    import re

    import sqlalchemy as sa
    from sqlalchemy import TypeDecorator
    from sqlalchemy.dialects.postgresql import ARRAY

    class ArrayOfEnum(TypeDecorator):
        impl = ARRAY

        def bind_expression(self, bindvalue):
            return sa.cast(bindvalue, self)

        def result_processor(self, dialect, coltype):
            super_rp = super(ArrayOfEnum, self).result_processor(dialect, coltype)

            def handle_raw_string(value):
                inner = re.match(r"^{(.*)}$", value).group(1)
                return inner.split(",") if inner else []

            def process(value):
                if value is None:
                    return None
                return super_rp(handle_raw_string(value))

            return process

      E.g.:

    Table(
        "mydata",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("data", ArrayOfEnum(ENUM("a", "b", "c", name="myenum"))),
    )

      This type is not included as a built-in type as it would be incompatible with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a new version.

      Using JSON/JSONB with ARRAY

Similar to using ENUM, prior to SQLAlchemy 1.3.17, for an ARRAY of JSON/JSONB we needed to render the appropriate CAST. Current psycopg2 drivers accommodate the result set correctly without any special steps.

      Changed in version 1.3.17: The combination of JSON/JSONB and ARRAY is now directly handled by SQLAlchemy’s implementation without any workarounds needed.

    import sqlalchemy as sa
    from sqlalchemy.dialects.postgresql import ARRAY

    class CastingArray(ARRAY):
        def bind_expression(self, bindvalue):
            return sa.cast(bindvalue, self)

      E.g.:

    Table(
        "mydata",
        metadata,
        Column("id", Integer, primary_key=True),
        Column("data", CastingArray(JSONB)),
    )

      Range and Multirange Types

      PostgreSQL range and multirange types are supported for the psycopg2, psycopg, and asyncpg dialects.

      Data values being passed to the database may be passed as string values or by using the Range data object.

New in version 2.0: Added the backend-agnostic Range object used to indicate ranges. The psycopg2-specific range classes are no longer exposed and are only used internally by that particular dialect.

      E.g. an example of a fully typed model using the TSRANGE datatype:

    from datetime import datetime

    from sqlalchemy.dialects.postgresql import Range
    from sqlalchemy.dialects.postgresql import TSRANGE
    from sqlalchemy.orm import DeclarativeBase
    from sqlalchemy.orm import Mapped
    from sqlalchemy.orm import mapped_column

    class Base(DeclarativeBase):
        pass

    class RoomBooking(Base):
        __tablename__ = "room_booking"

        id: Mapped[int] = mapped_column(primary_key=True)
        room: Mapped[str]
        during: Mapped[Range[datetime]] = mapped_column(TSRANGE)

To represent data for the during column above, the Range type is a simple dataclass that will represent the bounds of the range. Below illustrates an INSERT of a row into the above room_booking table:

    from sqlalchemy import create_engine
    from sqlalchemy.orm import Session

    engine = create_engine("postgresql+psycopg://scott:tiger@pg14/dbname")

    Base.metadata.create_all(engine)

    with Session(engine) as session:
        booking = RoomBooking(
            room="101", during=Range(datetime(2013, 3, 23), datetime(2013, 3, 25))
        )
        session.add(booking)
        session.commit()

      Selecting from any range column will also return Range objects as indicated:

    from sqlalchemy import select

    with Session(engine) as session:
        for row in session.execute(select(RoomBooking.during)):
            print(row)

      The available range datatypes are as follows:

      class sqlalchemy.dialects.postgresql.Range

      Represent a PostgreSQL range.

      E.g.:

    r = Range(10, 50, bounds="()")
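
A brief sketch of how such an object behaves:

    # the dataclass exposes the bounds as attributes
    print(r.lower, r.upper, r.bounds)  # 10 50 ()

    # membership may also be tested in Python
    print(r.contains(20))  # True: 20 falls within the open interval (10, 50)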

      The calling style is similar to that of psycopg and psycopg2, in part to allow easier migration from previous SQLAlchemy versions that used these objects directly.

      • Parameters:

        • lower – Lower bound value, or None

        • upper – Upper bound value, or None

        • bounds – keyword-only, optional string value that is one of "()", "[)", "(]", "[]". Defaults to "[)".

        • empty – keyword-only, optional bool indicating this is an “empty” range

      New in version 2.0.

      Members

adjacent_to(), contains(), is_empty, lower, lower_inf, not_extend_right_of(), strictly_left_of(), union(), upper_inc

      Class signature

      class sqlalchemy.dialects.postgresql.Range (typing.Generic)

      Multiranges

Multiranges are supported by PostgreSQL 14 and above. SQLAlchemy’s multirange datatypes deal in lists of Range types.

      New in version 2.0: Added support for MULTIRANGE datatypes. In contrast to the psycopg multirange feature, SQLAlchemy’s adaptation represents a multirange datatype as a list of Range objects.

The example below illustrates use of the TSMULTIRANGE datatype:

    from datetime import datetime
    from typing import List

    from sqlalchemy.dialects.postgresql import Range
    from sqlalchemy.dialects.postgresql import TSMULTIRANGE
    from sqlalchemy.orm import DeclarativeBase
    from sqlalchemy.orm import Mapped
    from sqlalchemy.orm import mapped_column

    class Base(DeclarativeBase):
        pass

    class EventCalendar(Base):
        __tablename__ = "event_calendar"

        id: Mapped[int] = mapped_column(primary_key=True)
        event_name: Mapped[str]
        in_session_periods: Mapped[List[Range[datetime]]] = mapped_column(TSMULTIRANGE)

      Illustrating insertion and selecting of a record:

    from sqlalchemy import create_engine
    from sqlalchemy import select
    from sqlalchemy.orm import Session

    engine = create_engine("postgresql+psycopg://scott:tiger@pg14/test")

    Base.metadata.create_all(engine)

    with Session(engine) as session:
        calendar = EventCalendar(
            event_name="SQLAlchemy Tutorial Sessions",
            in_session_periods=[
                Range(datetime(2013, 3, 23), datetime(2013, 3, 25)),
                Range(datetime(2013, 4, 12), datetime(2013, 4, 15)),
                Range(datetime(2013, 5, 9), datetime(2013, 5, 12)),
            ],
        )
        session.add(calendar)
        session.commit()

        for multirange in session.scalars(select(EventCalendar.in_session_periods)):
            for range_ in multirange:
                print(f"Start: {range_.lower} End: {range_.upper}")

      Note

In the above example, the list of Range types as handled by the ORM will not automatically detect in-place changes to a particular list value; to update list values with the ORM, either re-assign a new list to the attribute, or use the MutableList type modifier. See the section Mutation Tracking for background.
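
A sketch of the EventCalendar mapping above with the MutableList modifier applied, so that in-place mutations are tracked:

    from datetime import datetime
    from typing import List

    from sqlalchemy.dialects.postgresql import Range, TSMULTIRANGE
    from sqlalchemy.ext.mutable import MutableList
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

    class Base(DeclarativeBase):
        pass

    class EventCalendar(Base):
        __tablename__ = "event_calendar"

        id: Mapped[int] = mapped_column(primary_key=True)
        event_name: Mapped[str]
        # MutableList detects in-place changes such as .append() and flags
        # the attribute as changed for the next flush
        in_session_periods: Mapped[List[Range[datetime]]] = mapped_column(
            MutableList.as_mutable(TSMULTIRANGE)
        )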

      The available multirange datatypes are as follows:

      PostgreSQL Data Types

      As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with PostgreSQL are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

    from sqlalchemy.dialects.postgresql import (
        ARRAY,
        BIGINT,
        BIT,
        BOOLEAN,
        BYTEA,
        CHAR,
        CIDR,
        DATE,
        DOUBLE_PRECISION,
        ENUM,
        FLOAT,
        HSTORE,
        INET,
        INTEGER,
        INTERVAL,
        JSON,
        JSONB,
        MACADDR,
        MACADDR8,
        MONEY,
        NUMERIC,
        OID,
        REAL,
        SMALLINT,
        TEXT,
        TIME,
        TIMESTAMP,
        UUID,
        VARCHAR,
        INT4RANGE,
        INT8RANGE,
        NUMRANGE,
        DATERANGE,
        TSRANGE,
        TSTZRANGE,
        REGCONFIG,
        REGCLASS,
        TSQUERY,
        TSVECTOR,
    )

      Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:

Object Name - Description

AbstractMultiRange - Base for PostgreSQL MULTIRANGE types.
AbstractRange - Base for PostgreSQL RANGE types.
ARRAY - PostgreSQL ARRAY type.
BIT
BYTEA
CIDR
DATEMULTIRANGE - Represent the PostgreSQL DATEMULTIRANGE type.
DATERANGE - Represent the PostgreSQL DATERANGE type.
DOMAIN - Represent the DOMAIN PostgreSQL type.
DOUBLE_PRECISION - The SQL DOUBLE PRECISION type.
ENUM - PostgreSQL ENUM type.
HSTORE - Represent the PostgreSQL HSTORE type.
INET
INT4MULTIRANGE - Represent the PostgreSQL INT4MULTIRANGE type.
INT4RANGE - Represent the PostgreSQL INT4RANGE type.
INT8MULTIRANGE - Represent the PostgreSQL INT8MULTIRANGE type.
INT8RANGE - Represent the PostgreSQL INT8RANGE type.
INTERVAL - PostgreSQL INTERVAL type.
JSON - Represent the PostgreSQL JSON type.
JSONB - Represent the PostgreSQL JSONB type.
JSONPATH - JSON Path Type.
MACADDR
MACADDR8
MONEY - Provide the PostgreSQL MONEY type.
NUMMULTIRANGE - Represent the PostgreSQL NUMMULTIRANGE type.
NUMRANGE - Represent the PostgreSQL NUMRANGE type.
OID - Provide the PostgreSQL OID type.
REAL - The SQL REAL type.
REGCLASS - Provide the PostgreSQL REGCLASS type.
REGCONFIG - Provide the PostgreSQL REGCONFIG type.
TIME - PostgreSQL TIME type.
TIMESTAMP - Provide the PostgreSQL TIMESTAMP type.
TSMULTIRANGE - Represent the PostgreSQL TSMULTIRANGE type.
TSQUERY - Provide the PostgreSQL TSQUERY type.
TSRANGE - Represent the PostgreSQL TSRANGE type.
TSTZMULTIRANGE - Represent the PostgreSQL TSTZMULTIRANGE type.
TSTZRANGE - Represent the PostgreSQL TSTZRANGE type.
TSVECTOR - The TSVECTOR type implements the PostgreSQL text search type TSVECTOR.
UUID - Represent the SQL UUID type.

      class sqlalchemy.dialects.postgresql.AbstractRange

      Base for PostgreSQL RANGE types.

      See also

      PostgreSQL range functions

      Members

__ne__(), adjacent_to(), contained_by(), contains(), difference(), not_extend_left_of(), not_extend_right_of(), overlaps(), strictly_left_of(), strictly_right_of(), union(), comparator_factory

      Class signature

      class sqlalchemy.dialects.postgresql.AbstractRange ()

      • class comparator_factory

        Define comparison operations for range types.

        Class signature

        class sqlalchemy.dialects.postgresql.AbstractRange.comparator_factory (sqlalchemy.types.Comparator)

        • method __ne__(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if two ranges are not equal

• method adjacent_to(other: Any) → ColumnElement[bool]

  Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.

        • method contained_by(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the column is contained within the right hand operand.

        • method contains(other: Any, **kw: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.

          kwargs may be ignored by this operator but are required for API conformance.

• method difference(other: Any) → ColumnElement[bool]

  Range expression. Returns the difference of the two ranges. Will raise an exception if the resulting range is not contiguous.

        • method not_extend_left_of(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.

        • method not_extend_right_of(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.

        • method overlaps(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.

        • method strictly_left_of(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the column is strictly left of the right hand operand.

        • method strictly_right_of(other: Any) → ColumnElement[bool]

          Boolean expression. Returns true if the column is strictly right of the right hand operand.

        • method union(other: Any) → ColumnElement[bool]

          Range expression. Returns the union of the two ranges. Will raise an exception if the resulting range is not contiguous.
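As a rough illustration of these comparator methods in a query, a sketch assuming a hypothetical bookings table with a TSRANGE column:

from datetime import datetime

from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.postgresql import TSRANGE, Range

metadata = MetaData()

bookings = Table(
    "bookings",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("during", TSRANGE),
)

# rows whose range has points in common with January 2023; renders
# the PostgreSQL && operator
stmt = select(bookings).where(
    bookings.c.during.overlaps(
        Range(datetime(2023, 1, 1), datetime(2023, 2, 1))
    )
)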

      class sqlalchemy.dialects.postgresql.AbstractMultiRange

Base for PostgreSQL MULTIRANGE types.

      Class signature

class sqlalchemy.dialects.postgresql.AbstractMultiRange (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.ARRAY

      PostgreSQL ARRAY type.

      Changed in version 1.1: The type is now a subclass of the core ARRAY type.

      The type is constructed in the same way as the core ARRAY type; a member type is required, and a number of dimensions is recommended if the type is to be used for more than one dimension:

from sqlalchemy.dialects import postgresql

mytable = Table(
    "mytable", metadata,
    Column("data", postgresql.ARRAY(Integer, dimensions=2))
)

The ARRAY type provides all operations defined on the core ARRAY type, including support for “dimensions”, indexed access, and simple matching such as Comparator.any() and Comparator.all(). The ARRAY class also provides PostgreSQL-specific methods for containment operations, including Comparator.contains(), Comparator.contained_by(), and Comparator.overlap(), e.g.:

mytable.c.data.contains([1, 2])

      The type may not be supported on all PostgreSQL DBAPIs; it is currently known to work on psycopg2 only.

Additionally, the ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.

      Detecting Changes in ARRAY columns when using the ORM

The ARRAY type, when used with the SQLAlchemy ORM, does not detect in-place mutations to the array. In order to detect these, the sqlalchemy.ext.mutable extension must be used, using the MutableList class:

from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.ext.mutable import MutableList

class SomeOrmClass(Base):
    # ...

    data = Column(MutableList.as_mutable(ARRAY(Integer)))

This extension will allow “in-place” changes to the array, such as .append(), to produce events which will be detected by the unit of work. Note that changes to elements inside the array, including subarrays that are mutated in place, are not detected.

      Alternatively, assigning a new array value to an ORM element that replaces the old one will always trigger a change event.

      See also

      ARRAY - base array type

array - produces a literal array value.

      Members

contained_by(), contains(), overlap(), __init__()

      Class signature

      class sqlalchemy.dialects.postgresql.ARRAY ()

      • class Comparator

        Define comparison operations for ARRAY.

Note that these operations are in addition to those provided by the base class, including Comparator.any() and Comparator.all().

        Class signature

        class sqlalchemy.dialects.postgresql.ARRAY.Comparator (sqlalchemy.types.Comparator)

        • method contained_by(other)

          Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.

        • method sqlalchemy.dialects.postgresql.ARRAY.Comparator.contains(other, **kwargs)

          Boolean expression. Test if elements are a superset of the elements of the argument array expression.

          kwargs may be ignored by this operator but are required for API conformance.

        • method overlap(other)

          Boolean expression. Test if array has elements in common with an argument array expression.

      • method sqlalchemy.dialects.postgresql.ARRAY.__init__(item_type: _TypeEngineArgument[Any], as_tuple: bool = False, dimensions: Optional[int] = None, zero_indexes: bool = False)

        Construct an ARRAY.

        E.g.:

Column('myarray', ARRAY(Integer))

        Arguments are:

        • Parameters:

          • item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][], are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.

          • as_tuple=False – Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable.

          • dimensions – if non-None, the ARRAY will assume a fixed number of dimensions. This will cause the DDL emitted for this ARRAY to include the exact number of bracket clauses [], and will also optimize the performance of the type overall. Note that PG arrays are always implicitly “non-dimensioned”, meaning they can store any number of dimensions no matter how they were declared.

          • zero_indexes=False

            when True, index values will be converted between Python zero-based and PostgreSQL one-based indexes, e.g. a value of one will be added to all index values before passing to the database.

            New in version 0.9.5.
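To tie the parameters together, a brief sketch using a hypothetical grid table; the DDL and indexing behavior follow from the dimensions and zero_indexes notes above:

from sqlalchemy import Column, Integer, MetaData, Table, select
from sqlalchemy.dialects.postgresql import ARRAY

metadata = MetaData()

grid = Table(
    "grid",
    metadata,
    # renders INTEGER[][] in DDL because dimensions=2
    Column("cells", ARRAY(Integer, dimensions=2)),
)

# indexed access; index values use PostgreSQL's one-based style unless
# zero_indexes=True is passed to the type
stmt = select(grid.c.cells[1][2])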

      class sqlalchemy.dialects.postgresql.BIT

      Class signature

class sqlalchemy.dialects.postgresql.BIT (sqlalchemy.types.TypeEngine)

      class sqlalchemy.dialects.postgresql.BYTEA

Members

__init__()

      Class signature

      class sqlalchemy.dialects.postgresql.BYTEA ()

      • method sqlalchemy.dialects.postgresql.BYTEA.__init__(length: Optional[int] = None)

inherited from the sqlalchemy.types.LargeBinary.__init__ method of LargeBinary

        Construct a LargeBinary type.

        • Parameters:

          length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.

      class sqlalchemy.dialects.postgresql.CIDR

      Class signature

      class sqlalchemy.dialects.postgresql.CIDR ()

      class sqlalchemy.dialects.postgresql.DOMAIN

      Represent the DOMAIN PostgreSQL type.

      A domain is essentially a data type with optional constraints that restrict the allowed set of values. E.g.:

PositiveInt = DOMAIN(
    "pos_int", Integer, check="VALUE > 0", not_null=True
)

UsPostalCode = DOMAIN(
    "us_postal_code",
    Text,
    check="VALUE ~ '^\d{5}$' OR VALUE ~ '^\d{5}-\d{4}$'"
)

See the PostgreSQL documentation for additional details.

      New in version 2.0.

      Members

__init__(), create(), drop()

      Class signature

      class sqlalchemy.dialects.postgresql.DOMAIN (sqlalchemy.dialects.postgresql.named_types.NamedType, )

      • method sqlalchemy.dialects.postgresql.DOMAIN.__init__(name: str, data_type: _TypeEngineArgument[Any], *, collation: Optional[str] = None, default: Optional[Union[str, ]] = None, constraint_name: Optional[str] = None, not_null: Optional[bool] = None, check: Optional[str] = None, create_type: bool = True, **kw: Any)

        Construct a DOMAIN.

        • Parameters:

          • name – the name of the domain

          • data_type – The underlying data type of the domain. This can include array specifiers.

          • collation – An optional collation for the domain. If no collation is specified, the underlying data type’s default collation is used. The underlying type must be collatable if collation is specified.

          • default – The DEFAULT clause specifies a default value for columns of the domain data type. The default should be a string or a text() value. If no default value is specified, then the default value is the null value.

          • constraint_name – An optional name for a constraint. If not specified, the backend generates a name.

• not_null – Values of this domain are prevented from being null. By default, domains are allowed to be null. If not specified, no nullability clause will be emitted.

• check – a CHECK clause specifying an integrity constraint or test which values of the domain must satisfy. A constraint must be an expression producing a Boolean result that can use the key word VALUE to refer to the value being tested. Unlike PostgreSQL, SQLAlchemy currently allows only a single check clause.

          • schema – optional schema name

• metadata – optional MetaData object with which this DOMAIN will be directly associated

          • create_type – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped.

• method sqlalchemy.dialects.postgresql.DOMAIN.create(bind, checkfirst=True, **kw)

        inherited from the NamedType.create() method of NamedType

        Emit CREATE DDL for this type.

        • Parameters:

• bind – a connectable Engine, Connection, or similar object to emit SQL.

          • checkfirst – if True, a query against the PG catalog will be first performed to see if the type does not exist already before creating.

      • method sqlalchemy.dialects.postgresql.DOMAIN.drop(bind, checkfirst=True, **kw)

        inherited from the NamedType.drop() method of NamedType

        Emit DROP DDL for this type.

        • Parameters:

• bind – a connectable Engine, Connection, or similar object to emit SQL.

          • checkfirst – if True, a query against the PG catalog will be first performed to see if the type actually exists before dropping.
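Putting the constructor and the create/drop behavior together, a minimal sketch with a hypothetical orders table; with create_type left at its default, CREATE DOMAIN is emitted before the parent table when the table is created:

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import DOMAIN

metadata = MetaData()

# CREATE DOMAIN positive_int AS INTEGER CHECK (VALUE > 0) NOT NULL
positive_int = DOMAIN(
    "positive_int", Integer(), check="VALUE > 0", not_null=True
)

orders = Table(
    "orders",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("quantity", positive_int),
)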

      class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION

      The SQL DOUBLE PRECISION type.

      New in version 2.0.

      See also

Double - documentation for the base type.

      Class signature

      class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION (sqlalchemy.types.Double)

      • method __init__(precision: Optional[int] = None, asdecimal: bool = False, decimal_return_scale: Optional[int] = None)

        inherited from the sqlalchemy.types.Float.__init__ method of Float

        Construct a Float.

        • Parameters:

          • precision

            the numeric precision for use in DDL CREATE TABLE. Backends should attempt to ensure this precision indicates a number of digits for the generic datatype.

            Note

For the Oracle backend, the Float.precision parameter is not accepted when rendering DDL, as Oracle does not support float precision specified as a number of decimal places. Instead, use the Oracle-specific FLOAT datatype and specify the FLOAT.binary_precision parameter. This is new in version 2.0 of SQLAlchemy.

To create a database agnostic Float that separately specifies binary precision for Oracle, use TypeEngine.with_variant() as follows:

from sqlalchemy import Column
from sqlalchemy import Float
from sqlalchemy.dialects import oracle

Column(
    "float_data",
    Float(5).with_variant(oracle.FLOAT(binary_precision=16), "oracle")
)
• asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

          • decimal_return_scale

            Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

            New in version 0.9.0.

      class sqlalchemy.dialects.postgresql.ENUM

      PostgreSQL ENUM type.

      This is a subclass of Enum which includes support for PG’s CREATE TYPE and DROP TYPE.

When the built-in Enum type is used and the Enum.native_enum flag is left at its default of True, the PostgreSQL backend will use a PostgreSQL ENUM type as the implementation, so the special create/drop rules will be used.

      The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has to the parent table, in that it may be “owned” by just a single table, or may be shared among many tables.

      When using Enum or ENUM in an “inline” fashion, the CREATE TYPE and DROP TYPE is emitted corresponding to when the Table.create() and Table.drop() methods are called:

table = Table('sometable', metadata,
    Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
)

table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
table.drop(engine)  # will emit DROP TABLE and DROP ENUM

To use a common enumerated type between multiple tables, the best practice is to declare the Enum or ENUM independently, and associate it with the MetaData object itself:

my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

t1 = Table('sometable_one', metadata,
    Column('some_enum', my_enum)
)

t2 = Table('sometable_two', metadata,
    Column('some_enum', my_enum)
)

      When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:

t1.create(engine)  # will fail: no such type 'myenum'

      If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create if not exists:

# will check if enum exists, and emit CREATE TYPE if not
t1.create(engine, checkfirst=True)

When using a metadata-level ENUM type, the type will always be created and dropped whenever the metadata-wide create/drop is called:

metadata.create_all(engine)  # will emit CREATE TYPE
metadata.drop_all(engine)  # will emit DROP TYPE

      The type can also be created and dropped directly:

my_enum.create(engine)
my_enum.drop(engine)

      Changed in version 1.0.0: The PostgreSQL type now behaves more strictly with regards to CREATE/DROP. A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.

      Members

__init__(), create(), drop()

      Class signature

class sqlalchemy.dialects.postgresql.ENUM (sqlalchemy.dialects.postgresql.named_types.NamedType, sqlalchemy.types.NativeForEmulated, sqlalchemy.types.Enum)

      • method __init__(*enums, name: str, create_type: bool = True, **kw)

        Construct an ENUM.

Arguments are the same as that of Enum, but also including the following parameters.

        • Parameters:

create_type – Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless ENUM.create() or ENUM.drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database - the ENUM.create() and ENUM.drop() methods can be used to emit SQL to a target bind.

      • method sqlalchemy.dialects.postgresql.ENUM.create(bind=None, checkfirst=True)

Emit CREATE TYPE for this ENUM.

        If the underlying dialect does not support PostgreSQL CREATE TYPE, no action is taken.

        • Parameters:

• bind – a connectable Engine, Connection, or similar object to emit SQL.

          • checkfirst – if True, a query against the PG catalog will be first performed to see if the type does not exist already before creating.

      • method sqlalchemy.dialects.postgresql.ENUM.drop(bind=None, checkfirst=True)

Emit DROP TYPE for this ENUM.

        If the underlying dialect does not support PostgreSQL DROP TYPE, no action is taken.

        • Parameters:

• bind – a connectable Engine, Connection, or similar object to emit SQL.

          • checkfirst – if True, a query against the PG catalog will be first performed to see if the type actually exists before dropping.

      class sqlalchemy.dialects.postgresql.HSTORE

      Represent the PostgreSQL HSTORE type.

      The HSTORE type stores dictionaries containing strings, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', HSTORE)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )

HSTORE provides for a wide range of operations, including:

      • Index operations:

data_table.c.data['some key'] == 'some value'
      • Containment operations:

data_table.c.data.has_key('some key')
data_table.c.data.has_all(['one', 'two', 'three'])
      • Concatenation:

data_table.c.data + {"k1": "v1"}

      For a full list of special methods see comparator_factory.

      Detecting Changes in HSTORE columns when using the ORM

For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with the MutableDict dictionary, now part of the sqlalchemy.ext.mutable extension. This extension will allow “in-place” changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:

from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'data_table'

    id = Column(Integer, primary_key=True)
    data = Column(MutableDict.as_mutable(HSTORE))

my_object = session.query(MyClass).one()

# in-place mutation, requires Mutable extension
# in order for the ORM to detect
my_object.data['some_key'] = 'some value'

session.commit()

      When the extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.

      See also

      hstore - render the PostgreSQL hstore() function.

      Members

array(), contained_by(), contains(), defined(), delete(), has_all(), has_any(), has_key(), keys(), matrix(), slice(), vals(), __init__(), bind_processor(), comparator_factory, hashable, result_processor()

      Class signature

      class sqlalchemy.dialects.postgresql.HSTORE (, sqlalchemy.types.Concatenable, )

      • class Comparator

        Define comparison operations for HSTORE.

        Class signature

class sqlalchemy.dialects.postgresql.HSTORE.Comparator (sqlalchemy.types.Comparator, sqlalchemy.types.Comparator)

        • method sqlalchemy.dialects.postgresql.HSTORE.Comparator.array()

          Text array expression. Returns array of alternating keys and values.

        • method contained_by(other)

Boolean expression. Test if keys are a proper subset of the keys of the argument hstore expression.

        • method sqlalchemy.dialects.postgresql.HSTORE.Comparator.contains(other, **kwargs)

Boolean expression. Test if keys (or array) are a superset of/contained the keys of the argument hstore expression.

          kwargs may be ignored by this operator but are required for API conformance.

        • method defined(key)

          Boolean expression. Test for presence of a non-NULL value for the key. Note that the key may be a SQLA expression.

        • method sqlalchemy.dialects.postgresql.HSTORE.Comparator.delete(key)

          HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.

        • method has_all(other)

Boolean expression. Test for presence of all keys in the hstore.

        • method sqlalchemy.dialects.postgresql.HSTORE.Comparator.has_any(other)

Boolean expression. Test for presence of any key in the hstore.

        • method has_key(other)

          Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

        • method sqlalchemy.dialects.postgresql.HSTORE.Comparator.keys()

          Text array expression. Returns array of keys.

        • method matrix()

          Text array expression. Returns array of [key, value] pairs.

        • method sqlalchemy.dialects.postgresql.HSTORE.Comparator.slice(array)

          HStore expression. Returns a subset of an hstore defined by array of keys.

        • method vals()

          Text array expression. Returns array of values.
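A brief sketch of these comparators in SELECT statements, reusing the hypothetical data_table from the earlier example:

from sqlalchemy import select
from sqlalchemy.dialects.postgresql import array

# rows where the hstore value has 'key1' present; renders the ? operator
stmt = select(data_table).where(data_table.c.data.has_key("key1"))

# a subset of the hstore limited to the given keys, filtered by
# containment (the @> operator)
stmt = select(data_table.c.data.slice(array(["key1", "key2"]))).where(
    data_table.c.data.contains({"key1": "value1"})
)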

      • method sqlalchemy.dialects.postgresql.HSTORE.__init__(text_type=None)

Construct a new HSTORE.

        • Parameters:

          text_type

          the type that should be used for indexed values. Defaults to Text.

          New in version 1.1.0.

      • method bind_processor(dialect)

        Return a conversion function for processing bind values.

        Returns a callable which will receive a bind parameter value as the sole positional argument and will return a value to send to the DB-API.

        If processing is not necessary, the method should return None.

        Note

This method is only called relative to a dialect specific type object, which is often private to a dialect in use and is not the same type object as the public facing one, which means it’s not feasible to subclass a TypeEngine class in order to provide an alternate TypeEngine.bind_processor() method, unless subclassing the UserDefinedType class explicitly.

To provide alternate behavior for TypeEngine.bind_processor(), implement a TypeDecorator class and provide an implementation of TypeDecorator.process_bind_param().

        See also

        Augmenting Existing Types

        • Parameters:

          dialect – Dialect instance in use.

      • attribute comparator_factory

        alias of Comparator

      • attribute hashable = False

        Flag, if False, means values from this type aren’t hashable.

        Used by the ORM when uniquing result lists.

      • method sqlalchemy.dialects.postgresql.HSTORE.result_processor(dialect, coltype)

        Return a conversion function for processing result row values.

        Returns a callable which will receive a result row column value as the sole positional argument and will return a value to return to the user.

        If processing is not necessary, the method should return None.

        Note

This method is only called relative to a dialect specific type object, which is often private to a dialect in use and is not the same type object as the public facing one, which means it’s not feasible to subclass a TypeEngine class in order to provide an alternate TypeEngine.result_processor() method, unless subclassing the UserDefinedType class explicitly.

To provide alternate behavior for TypeEngine.result_processor(), implement a TypeDecorator class and provide an implementation of TypeDecorator.process_result_value().

See also

        Augmenting Existing Types

        • Parameters:

          • dialect – Dialect instance in use.

          • coltype – DBAPI coltype argument received in cursor.description.

      class sqlalchemy.dialects.postgresql.INET

      Class signature

      class sqlalchemy.dialects.postgresql.INET ()

      class sqlalchemy.dialects.postgresql.INTERVAL

      PostgreSQL INTERVAL type.

      Members

      __init__()

      Class signature

class sqlalchemy.dialects.postgresql.INTERVAL (sqlalchemy.types.NativeForEmulated, sqlalchemy.types._AbstractInterval)

      • method sqlalchemy.dialects.postgresql.INTERVAL.__init__(precision=None, fields=None)

        Construct an INTERVAL.

        • Parameters:

          • precision – optional integer precision value

          • fields

string fields specifier. Allows storage of fields to be limited, such as "YEAR", "MONTH", "DAY TO HOUR", etc.

            New in version 1.2.

      class sqlalchemy.dialects.postgresql.JSON

      Represent the PostgreSQL JSON type.

JSON is used automatically whenever the base JSON datatype is used against a PostgreSQL backend; however, the base datatype does not provide Python accessors for PostgreSQL-specific comparison methods such as Comparator.astext. Additionally, to use PostgreSQL JSONB, the JSONB datatype should be used explicitly.

      See also

      JSON - main documentation for the generic cross-platform JSON datatype.

The operators provided by the PostgreSQL version of JSON include:

      • Index operations (the -> operator):

data_table.c.data['some key']
data_table.c.data[5]
      • Index operations returning text (the ->> operator):

data_table.c.data['some key'].astext == 'some value'

        Note that equivalent functionality is available via the Comparator.as_string accessor.

      • Index operations with CAST (equivalent to CAST(col ->> ['some key'] AS <type>)):

data_table.c.data['some key'].astext.cast(Integer) == 5

Note that equivalent functionality is available via the Comparator.as_integer and similar accessors.

      • Path index operations (the #> operator):

data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')]
      • Path index operations returning text (the #>> operator):

data_table.c.data[('key_1', 'key_2', 5, ..., 'key_n')].astext == 'some value'

Changed in version 1.1: The ColumnElement.cast() operator on JSON objects now requires that the Comparator.astext modifier be called explicitly, if the cast works only from a textual string.

Index operations return an expression object whose type defaults to JSON, so that further JSON-oriented instructions may be called upon the result type.

Custom serializers and deserializers are specified at the dialect level, that is, using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level. E.g.:

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    json_serializer=my_serialize_fn,
    json_deserializer=my_deserialize_fn
)

      When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.
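The two callables are ordinary string-in/string-out functions; a sketch using the standard library json module (the function names mirror the hypothetical ones in the example above):

import json

from sqlalchemy import create_engine


def my_serialize_fn(obj):
    # receives the Python structure, must return a JSON string
    return json.dumps(obj, sort_keys=True)


def my_deserialize_fn(text):
    # receives the JSON string from the database, returns Python data
    return json.loads(text)


engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    json_serializer=my_serialize_fn,
    json_deserializer=my_deserialize_fn,
)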

      See also

      JSON - Core level JSON type

Changed in version 1.1: JSON is now a PostgreSQL-specific specialization of the new core JSON type.

      Members

astext, __init__(), comparator_factory

      Class signature

class sqlalchemy.dialects.postgresql.JSON (sqlalchemy.types.JSON)

      • class Comparator

Define comparison operations for JSON.

        Class signature

        class sqlalchemy.dialects.postgresql.JSON.Comparator (sqlalchemy.types.Comparator)

        • attribute astext

          On an indexed expression, use the “astext” (e.g. “->>”) conversion when rendered in SQL.

          E.g.:

select(data_table.c.data['some key'].astext)

          See also

          ColumnElement.cast()

      • method __init__(none_as_null=False, astext_type=None)

        Construct a JSON type.

        • Parameters:

          • none_as_null

if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:

            Changed in version 0.9.8: - Added none_as_null, and null() is now supported in order to persist a NULL value.

See also

            JSON.NULL

          • astext_type

the type to use for the Comparator.astext accessor on indexed attributes. Defaults to Text.

            New in version 1.1.
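To illustrate the none_as_null behavior, a short sketch assuming an existing engine and a data_table with a JSON column, as in the examples elsewhere in this section:

from sqlalchemy import null

with engine.connect() as conn:
    # with none_as_null=False (the default), a Python None would be
    # persisted as the JSON 'null' value; null() forces a SQL NULL
    # regardless of the flag
    conn.execute(data_table.insert(), {"data": null()})
    conn.commit()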

      class sqlalchemy.dialects.postgresql.JSONB

      Represent the PostgreSQL JSONB type.

      The JSONB type stores arbitrary JSONB format data, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSONB)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data={"key1": "value1", "key2": "value2"}
    )

The JSONB type includes all operations provided by JSON, including the same behaviors for indexing operations. It also adds additional operators specific to JSONB, including Comparator.has_key(), Comparator.has_all(), Comparator.has_any(), Comparator.contains(), Comparator.contained_by(), Comparator.delete_path(), Comparator.path_exists() and Comparator.path_match().

Like the JSON type, the JSONB type does not detect in-place changes when used with the ORM, unless the sqlalchemy.ext.mutable extension is used.

Custom serializers and deserializers are shared with the JSON class, using the json_serializer and json_deserializer keyword arguments. These must be specified at the dialect level using create_engine(). When using psycopg2, the serializers are associated with the jsonb type using psycopg2.extras.register_default_jsonb on a per-connection basis, in the same way that psycopg2.extras.register_default_json is used to register these handlers with the json type.

      New in version 0.9.7.

      See also

      JSON

      Members

contained_by(), contains(), delete_path(), has_all(), has_any(), has_key(), path_exists(), path_match(), comparator_factory

      Class signature

      class sqlalchemy.dialects.postgresql.JSONB ()

      • class Comparator

Define comparison operations for JSONB.

        Class signature

class sqlalchemy.dialects.postgresql.JSONB.Comparator (sqlalchemy.dialects.postgresql.json.Comparator)

        • method sqlalchemy.dialects.postgresql.JSONB.Comparator.contained_by(other)

          Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression.

        • method contains(other, **kwargs)

          Boolean expression. Test if keys (or array) are a superset of/contained the keys of the argument jsonb expression.

          kwargs may be ignored by this operator but are required for API conformance.

        • method sqlalchemy.dialects.postgresql.JSONB.Comparator.delete_path(array)

          JSONB expression. Deletes field or array element specified in the argument array.

The input may be a list of strings that will be coerced to an ARRAY, or an instance of postgresql.array().

          New in version 2.0.

        • method has_all(other)

          Boolean expression. Test for presence of all keys in jsonb

        • method sqlalchemy.dialects.postgresql.JSONB.Comparator.has_any(other)

          Boolean expression. Test for presence of any key in jsonb

        • method has_key(other)

          Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

        • method sqlalchemy.dialects.postgresql.JSONB.Comparator.path_exists(other)

          Boolean expression. Test for presence of item given by the argument JSONPath expression.

          New in version 2.0.

        • method path_match(other)

          Boolean expression. Test if JSONPath predicate given by the argument JSONPath expression matches.

          Only the first item of the result is taken into account.

          New in version 2.0.

      • attribute sqlalchemy.dialects.postgresql.JSONB.comparator_factory

alias of Comparator
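A short sketch of the JSONB-specific operators in queries, reusing the data_table from the example above:

from sqlalchemy import select

# @> containment: rows whose document contains the given fragment
stmt = select(data_table).where(
    data_table.c.data.contains({"key1": "value1"})
)

# @? path existence against a JSONPath expression
stmt = select(data_table).where(data_table.c.data.path_exists("$.key1"))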

      class sqlalchemy.dialects.postgresql.JSONPATH

      JSON Path Type.

This is usually required to cast literal values to a JSON path when using JSON search functions, such as jsonb_path_query_array or jsonb_path_exists:

stmt = sa.select(
    sa.func.jsonb_path_query_array(
        table.c.jsonb_col, cast("$.address.id", JSONPATH)
    )
)

      Class signature

      class sqlalchemy.dialects.postgresql.JSONPATH (sqlalchemy.dialects.postgresql.json.JSONPathType)

      class sqlalchemy.dialects.postgresql.MACADDR

      Class signature

class sqlalchemy.dialects.postgresql.MACADDR (sqlalchemy.types.TypeEngine)

      class sqlalchemy.dialects.postgresql.MACADDR8

      Class signature

class sqlalchemy.dialects.postgresql.MACADDR8 (sqlalchemy.types.TypeEngine)

      class sqlalchemy.dialects.postgresql.MONEY

      Provide the PostgreSQL MONEY type.

      Depending on driver, result rows using this type may return a string value which includes currency symbols.

For this reason, it may be preferable to provide conversion to a numerically-based currency datatype using TypeDecorator:

import re
import decimal
from typing import Any

from sqlalchemy import TypeDecorator
from sqlalchemy.dialects.postgresql import MONEY


class NumericMoney(TypeDecorator):
    impl = MONEY

    def process_result_value(self, value: Any, dialect: Any) -> Any:
        if value is not None:
            # adjust this for the currency and numeric
            m = re.match(r"\$([\d.]+)", value)
            if m:
                value = decimal.Decimal(m.group(1))
        return value

      Alternatively, the conversion may be applied as a CAST using the TypeDecorator.column_expression() method as follows:

from typing import Any

from sqlalchemy import Numeric
from sqlalchemy import cast
from sqlalchemy import TypeDecorator
from sqlalchemy.dialects.postgresql import MONEY


class NumericMoney(TypeDecorator):
    impl = MONEY

    def column_expression(self, column: Any):
        return cast(column, Numeric())

      New in version 1.2.

      Class signature

class sqlalchemy.dialects.postgresql.MONEY (sqlalchemy.types.TypeEngine)

      class sqlalchemy.dialects.postgresql.OID

      Provide the PostgreSQL OID type.

      New in version 0.9.5.

      Class signature

class sqlalchemy.dialects.postgresql.OID (sqlalchemy.types.TypeEngine)

      class sqlalchemy.dialects.postgresql.REAL

      The SQL REAL type.

      See also

Float - documentation for the base type.

      Class signature

      class sqlalchemy.dialects.postgresql.REAL (sqlalchemy.types.Float)

      • method __init__(precision: Optional[int] = None, asdecimal: bool = False, decimal_return_scale: Optional[int] = None)

        inherited from the sqlalchemy.types.Float.__init__ method of Float

        Construct a Float.

        • Parameters:

          • precision

            the numeric precision for use in DDL CREATE TABLE. Backends should attempt to ensure this precision indicates a number of digits for the generic datatype.

            Note

For the Oracle backend, the Float.precision parameter is not accepted when rendering DDL, as Oracle does not support float precision specified as a number of decimal places. Instead, use the Oracle-specific FLOAT datatype and specify the FLOAT.binary_precision parameter. This is new in version 2.0 of SQLAlchemy.

To create a database agnostic Float that separately specifies binary precision for Oracle, use TypeEngine.with_variant() as follows:

from sqlalchemy import Column
from sqlalchemy import Float
from sqlalchemy.dialects import oracle

Column(
    "float_data",
    Float(5).with_variant(oracle.FLOAT(binary_precision=16), "oracle")
)
• asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.

          • decimal_return_scale

            Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

            New in version 0.9.0.

      class sqlalchemy.dialects.postgresql.REGCONFIG

      Provide the PostgreSQL REGCONFIG type.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.REGCONFIG ()

      class sqlalchemy.dialects.postgresql.REGCLASS

      Provide the PostgreSQL REGCLASS type.

      New in version 1.2.7.

      Class signature

      class sqlalchemy.dialects.postgresql.REGCLASS ()

      class sqlalchemy.dialects.postgresql.TIMESTAMP

      Provide the PostgreSQL TIMESTAMP type.

      Members

      __init__()

      Class signature

class sqlalchemy.dialects.postgresql.TIMESTAMP (sqlalchemy.types.TIMESTAMP)

      • method __init__(timezone=False, precision=None)

        Construct a TIMESTAMP.

        • Parameters:

          • timezone – boolean value if timezone present, default False

          • precision

            optional integer precision value

            New in version 1.4.

      class sqlalchemy.dialects.postgresql.TIME

      PostgreSQL TIME type.

      Members

      __init__()

Class signature

      class sqlalchemy.dialects.postgresql.TIME (sqlalchemy.types.TIME)

      • method __init__(timezone=False, precision=None)

        Construct a TIME.

        • Parameters:

          • timezone – boolean value if timezone present, default False

          • precision

            optional integer precision value

            New in version 1.4.

      class sqlalchemy.dialects.postgresql.TSQUERY

      Provide the PostgreSQL TSQUERY type.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.TSQUERY ()

      class sqlalchemy.dialects.postgresql.TSVECTOR

      The TSVECTOR type implements the PostgreSQL text search type TSVECTOR.

      It can be used to do full text queries on natural language documents.

      New in version 0.9.0.

      Class signature

      class sqlalchemy.dialects.postgresql.TSVECTOR ()

      class sqlalchemy.dialects.postgresql.UUID

      Represent the SQL UUID type.

      This is the SQL-native form of the Uuid database agnostic datatype, and is backwards compatible with the previous PostgreSQL-only version of UUID.

      The datatype only works on databases that have a SQL datatype named UUID. It will not function for backends which don’t have this exact-named type, including SQL Server. For backend-agnostic UUID values with native support, including for SQL Server’s UNIQUEIDENTIFIER datatype, use the Uuid datatype.

      New in version 2.0.

See also

      Uuid

      Class signature

      class sqlalchemy.dialects.postgresql.UUID (sqlalchemy.types.Uuid)

      • method __init__(as_uuid: bool = True)

        Construct a UUID type.

        • Parameters:

          as_uuid=True

          if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.
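A minimal sketch; the tokens table is hypothetical, and a client-side uuid.uuid4 default is used to show the round trip of Python uuid objects:

import uuid

from sqlalchemy import Column, MetaData, Table
from sqlalchemy.dialects.postgresql import UUID

metadata = MetaData()

tokens = Table(
    "tokens",
    metadata,
    # as_uuid=True (the default) accepts and returns uuid.UUID objects
    Column("token", UUID(as_uuid=True), primary_key=True, default=uuid.uuid4),
)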

      class sqlalchemy.dialects.postgresql.INT4RANGE

      Represent the PostgreSQL INT4RANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.INT4RANGE (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.INT8RANGE

      Represent the PostgreSQL INT8RANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.INT8RANGE (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.NUMRANGE

      Represent the PostgreSQL NUMRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.NUMRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.DATERANGE

      Represent the PostgreSQL DATERANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.DATERANGE (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.TSRANGE

      Represent the PostgreSQL TSRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.TSRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.TSTZRANGE

      Represent the PostgreSQL TSTZRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.TSTZRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractRange)

      class sqlalchemy.dialects.postgresql.INT4MULTIRANGE

      Represent the PostgreSQL INT4MULTIRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.INT4MULTIRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractMultiRange)

      class sqlalchemy.dialects.postgresql.INT8MULTIRANGE

      Represent the PostgreSQL INT8MULTIRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.INT8MULTIRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractMultiRange)

      class sqlalchemy.dialects.postgresql.NUMMULTIRANGE

      Represent the PostgreSQL NUMMULTIRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.NUMMULTIRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractMultiRange)

      class sqlalchemy.dialects.postgresql.DATEMULTIRANGE

      Represent the PostgreSQL DATEMULTIRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.DATEMULTIRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractMultiRange)

      class sqlalchemy.dialects.postgresql.TSMULTIRANGE

Represent the PostgreSQL TSMULTIRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.TSMULTIRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractMultiRange)

      class sqlalchemy.dialects.postgresql.TSTZMULTIRANGE

Represent the PostgreSQL TSTZMULTIRANGE type.

      Class signature

class sqlalchemy.dialects.postgresql.TSTZMULTIRANGE (sqlalchemy.dialects.postgresql.ranges.AbstractMultiRange)

      PostgreSQL SQL Elements and Functions

      class sqlalchemy.dialects.postgresql.aggregate_order_by

      Represent a PostgreSQL aggregate order by expression.

      E.g.:

from sqlalchemy.dialects.postgresql import aggregate_order_by

expr = func.array_agg(aggregate_order_by(table.c.a, table.c.b.desc()))
stmt = select(expr)

      would represent the expression:

SELECT array_agg(a ORDER BY b DESC) FROM table;

      Similarly:

expr = func.string_agg(
    table.c.a,
    aggregate_order_by(literal_column("','"), table.c.a)
)
stmt = select(expr)

      Would represent:

SELECT string_agg(a, ',' ORDER BY a) FROM table;

      New in version 1.1.

      Changed in version 1.2.13: - the ORDER BY argument may be multiple terms

      See also

      array_agg

      Class signature

class sqlalchemy.dialects.postgresql.aggregate_order_by (sqlalchemy.sql.expression.ColumnElement)

      class sqlalchemy.dialects.postgresql.array

      A PostgreSQL ARRAY literal.

      This is used to produce ARRAY literals in SQL expressions, e.g.:

from sqlalchemy.dialects.postgresql import array
from sqlalchemy.dialects import postgresql
from sqlalchemy import select, func

stmt = select(array([1, 2]) + array([3, 4, 5]))

print(stmt.compile(dialect=postgresql.dialect()))

      Produces the SQL:

SELECT ARRAY[%(param_1)s, %(param_2)s] ||
    ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1

An instance of array will always have the datatype ARRAY. The “inner” type of the array is inferred from the values present, unless the type_ keyword argument is passed:

array(['foo', 'bar'], type_=CHAR)

Multidimensional arrays are produced by nesting array constructs. The dimensionality of the final ARRAY type is calculated by recursively adding the dimensions of the inner type:

stmt = select(
    array([
        array([1, 2]), array([3, 4]), array([column('q'), column('x')])
    ])
)

print(stmt.compile(dialect=postgresql.dialect()))

      Produces:

SELECT ARRAY[ARRAY[%(param_1)s, %(param_2)s],
    ARRAY[%(param_3)s, %(param_4)s], ARRAY[q, x]] AS anon_1

      New in version 1.3.6: added support for multidimensional array literals

      See also

      ARRAY

      Class signature

class sqlalchemy.dialects.postgresql.array (sqlalchemy.sql.expression.ExpressionClauseList)

      function sqlalchemy.dialects.postgresql.array_agg(*arg, **kw)

PostgreSQL-specific form of array_agg; ensures the return type is the PostgreSQL-specific ARRAY and not the plain core ARRAY, unless an explicit type_ is passed.

      New in version 1.1.

      function sqlalchemy.dialects.postgresql.Any(other, arrexpr, operator=<built-in function eq>)

A synonym for the ARRAY-level Comparator.any() method. See that method for details.

      function sqlalchemy.dialects.postgresql.All(other, arrexpr, operator=<built-in function eq>)

      A synonym for the ARRAY-level Comparator.all() method. See that method for details.
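The standalone functions are interchangeable with the comparator methods; a sketch reusing the hypothetical mytable from the ARRAY example above:

from sqlalchemy import select
from sqlalchemy.dialects.postgresql import All, Any

# ANY: true if any array element equals 5
stmt = select(mytable).where(Any(5, mytable.c.data))

# the same condition via the comparator method
stmt = select(mytable).where(mytable.c.data.any(5))

# ALL: true only if every array element equals 5
stmt = select(mytable).where(All(5, mytable.c.data))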

      class sqlalchemy.dialects.postgresql.hstore

      Construct an hstore value within a SQL expression using the PostgreSQL hstore() function.

      The function accepts one or two arguments as described in the PostgreSQL documentation.

      E.g.:

from sqlalchemy.dialects.postgresql import array, hstore

select(hstore('key1', 'value1'))

select(
    hstore(
        array(['key1', 'key2', 'key3']),
        array(['value1', 'value2', 'value3'])
    )
)

      See also

      HSTORE - the PostgreSQL HSTORE datatype.

      Members

inherit_cache, type

      Class signature

class sqlalchemy.dialects.postgresql.hstore (sqlalchemy.sql.functions.GenericFunction)

      • attribute inherit_cache: Optional[bool] = True

        Indicate if this HasCacheKey instance should make use of the cache key generation scheme used by its immediate superclass.

The attribute defaults to None, which indicates that a construct has not yet taken into account whether or not it is appropriate for it to participate in caching; this is functionally equivalent to setting the value to False, except that a warning is also emitted.

        This flag can be set to True on a particular class, if the SQL that corresponds to the object does not change based on attributes which are local to this class, and not its superclass.

        See also

- General guidelines for setting the HasCacheKey.inherit_cache attribute for third-party or user defined SQL constructs.

      • attribute type

        alias of HSTORE

      class sqlalchemy.dialects.postgresql.to_tsvector

      The PostgreSQL to_tsvector SQL function.

This function applies automatic casting of the REGCONFIG argument to use the REGCONFIG datatype, and applies a return type of TSVECTOR.

      Assuming the PostgreSQL dialect has been imported, either by invoking from sqlalchemy.dialects import postgresql, or by creating a PostgreSQL engine using create_engine("postgresql..."), to_tsvector will be used automatically when invoking sqlalchemy.func.to_tsvector(), ensuring the correct argument and return type handlers are used at compile and execution time.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.to_tsvector (sqlalchemy.dialects.postgresql.ext._regconfig_fn)

      class sqlalchemy.dialects.postgresql.to_tsquery

      The PostgreSQL to_tsquery SQL function.

This function applies automatic casting of the REGCONFIG argument to use the REGCONFIG datatype, and applies a return type of TSQUERY.

      Assuming the PostgreSQL dialect has been imported, either by invoking from sqlalchemy.dialects import postgresql, or by creating a PostgreSQL engine using create_engine("postgresql..."), to_tsquery will be used automatically when invoking sqlalchemy.func.to_tsquery(), ensuring the correct argument and return type handlers are used at compile and execution time.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.to_tsquery (sqlalchemy.dialects.postgresql.ext._regconfig_fn)

      class sqlalchemy.dialects.postgresql.plainto_tsquery

      The PostgreSQL plainto_tsquery SQL function.

This function applies automatic casting of the REGCONFIG argument to use the REGCONFIG datatype, and applies a return type of TSQUERY.

      Assuming the PostgreSQL dialect has been imported, either by invoking from sqlalchemy.dialects import postgresql, or by creating a PostgreSQL engine using create_engine("postgresql..."), plainto_tsquery will be used automatically when invoking sqlalchemy.func.plainto_tsquery(), ensuring the correct argument and return type handlers are used at compile and execution time.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.plainto_tsquery (sqlalchemy.dialects.postgresql.ext._regconfig_fn)

      class sqlalchemy.dialects.postgresql.phraseto_tsquery

      The PostgreSQL phraseto_tsquery SQL function.

This function applies automatic casting of the REGCONFIG argument to use the REGCONFIG datatype, and applies a return type of TSQUERY.

      Assuming the PostgreSQL dialect has been imported, either by invoking from sqlalchemy.dialects import postgresql, or by creating a PostgreSQL engine using create_engine("postgresql..."), phraseto_tsquery will be used automatically when invoking sqlalchemy.func.phraseto_tsquery(), ensuring the correct argument and return type handlers are used at compile and execution time.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.phraseto_tsquery (sqlalchemy.dialects.postgresql.ext._regconfig_fn)

      class sqlalchemy.dialects.postgresql.websearch_to_tsquery

      The PostgreSQL websearch_to_tsquery SQL function.

This function applies automatic casting of the REGCONFIG argument to use the REGCONFIG datatype, and applies a return type of TSQUERY.

      Assuming the PostgreSQL dialect has been imported, either by invoking from sqlalchemy.dialects import postgresql, or by creating a PostgreSQL engine using create_engine("postgresql..."), websearch_to_tsquery will be used automatically when invoking sqlalchemy.func.websearch_to_tsquery(), ensuring the correct argument and return type handlers are used at compile and execution time.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.websearch_to_tsquery (sqlalchemy.dialects.postgresql.ext._regconfig_fn)

      class sqlalchemy.dialects.postgresql.ts_headline

      The PostgreSQL ts_headline SQL function.

This function applies automatic casting of the REGCONFIG argument to use the REGCONFIG datatype, and applies a return type of TEXT.

      Assuming the PostgreSQL dialect has been imported, either by invoking from sqlalchemy.dialects import postgresql, or by creating a PostgreSQL engine using create_engine("postgresql..."), ts_headline will be used automatically when invoking sqlalchemy.func.ts_headline(), ensuring the correct argument and return type handlers are used at compile and execution time.

      New in version 2.0.0rc1.

      Class signature

      class sqlalchemy.dialects.postgresql.ts_headline (sqlalchemy.dialects.postgresql.ext._regconfig_fn)
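A rough sketch of these functions in a full text query; the articles table is hypothetical, and the @@ match operator is spelled here via the generic bool_op() method:

from sqlalchemy import Column, Integer, MetaData, Table, func, select
from sqlalchemy.dialects.postgresql import TSVECTOR

metadata = MetaData()

articles = Table(
    "articles",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("search_vector", TSVECTOR),
)

# tsvector @@ tsquery; the 'english' argument is typed as REGCONFIG
stmt = select(articles).where(
    articles.c.search_vector.bool_op("@@")(
        func.to_tsquery("english", "sqlalchemy & postgres")
    )
)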

      PostgreSQL Constraint Types

      SQLAlchemy supports PostgreSQL EXCLUDE constraints via the ExcludeConstraint class:

Object Name - Description

      ExcludeConstraint - A table-level EXCLUDE constraint.

      class sqlalchemy.dialects.postgresql.ExcludeConstraint

      A table-level EXCLUDE constraint.

      Defines an EXCLUDE constraint as described in the PostgreSQL documentation.

Members

      __init__()

      Class signature

      class sqlalchemy.dialects.postgresql.ExcludeConstraint ()

      • method sqlalchemy.dialects.postgresql.ExcludeConstraint.__init__(*elements, **kw)

Create an ExcludeConstraint object.

        E.g.:

const = ExcludeConstraint(
    (Column('period'), '&&'),
    (Column('group'), '='),
    where=(Column('group') != 'some group'),
    ops={'group': 'my_operator_class'}
)

The constraint is normally embedded into the Table construct directly, or added later using Table.append_constraint():

some_table = Table(
    'some_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('period', TSRANGE()),
    Column('group', String)
)

some_table.append_constraint(
    ExcludeConstraint(
        (some_table.c.period, '&&'),
        (some_table.c.group, '='),
        where=some_table.c.group != 'some group',
        name='some_table_excl_const',
        ops={'group': 'my_operator_class'}
    )
)
        • Parameters:

• *elements – A sequence of two tuples of the form (column, operator) where “column” is a SQL expression element or a raw SQL string, most typically a Column object, and “operator” is a string containing the operator to use. In order to specify a column name when a Column object is not available, while ensuring that any necessary quoting rules take effect, an ad-hoc Column or column() object should be used.

          • name – Optional, the in-database name of this constraint.

          • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.

          • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.

          • using – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to ‘gist’.

          • where

            Optional SQL expression construct or literal SQL string. If set, emit WHERE <predicate> when issuing DDL for this constraint.

            Warning

The ExcludeConstraint.where argument can be passed as a Python string argument, which will be treated as trusted SQL text and rendered as given. DO NOT PASS UNTRUSTED INPUT TO THIS PARAMETER.

          • ops

Optional dictionary. Used to define operator classes for the elements; works the same way as the postgresql_ops parameter specified to the Index construct.

            New in version 1.3.21.

            See also

            Operator Classes - general description of how PostgreSQL operator classes are specified.

      For example:

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE

class RoomBooking(Base):
    __tablename__ = "room_booking"

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (ExcludeConstraint(("room", "="), ("during", "&&")),)

      PostgreSQL DML Constructs

      function sqlalchemy.dialects.postgresql.insert(table)

      Construct a PostgreSQL-specific variant Insert construct.

The function creates a sqlalchemy.dialects.postgresql.Insert. This class is based on the dialect-agnostic Insert construct which may be constructed using the insert() function in SQLAlchemy Core.

The construct includes the additional methods Insert.on_conflict_do_update() and Insert.on_conflict_do_nothing().
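For example, a brief sketch; the user_table Table is hypothetical, shown here only to make the snippet self-contained:

from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.dialects.postgresql import insert

metadata = MetaData()
user_table = Table(
    "user",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
)

# a PostgreSQL-specific Insert construct, with the ON CONFLICT
# methods available in addition to the Core Insert API
stmt = insert(user_table).values(id=1, name="alice")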

      class sqlalchemy.dialects.postgresql.Insert

      PostgreSQL-specific implementation of INSERT.

      Adds methods for PG-specific syntaxes such as ON CONFLICT.

The Insert object is created using the sqlalchemy.dialects.postgresql.insert() function.

      New in version 1.1.

      Members

excluded, inherit_cache, on_conflict_do_nothing(), on_conflict_do_update()

      Class signature

      class sqlalchemy.dialects.postgresql.Insert ()

      • attribute sqlalchemy.dialects.postgresql.Insert.excluded

        Provide the excluded namespace for an ON CONFLICT statement

PG’s ON CONFLICT clause allows reference to the row that would be inserted, known as excluded. This attribute makes all columns in that row referenceable.

        Tip

The Insert.excluded attribute is an instance of ColumnCollection, which provides an interface the same as that of the collection described at Accessing Tables and Columns. With this collection, ordinary names are accessible like attributes (e.g. stmt.excluded.some_column), but special names and dictionary method names should be accessed using indexed access, such as stmt.excluded["column name"] or stmt.excluded["values"]. See the docstring for ColumnCollection for further examples.

        See also

        INSERT…ON CONFLICT (Upsert) - example of how to use Insert.excluded
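A minimal sketch of the two access styles noted in the tip above, assuming stmt is a PostgreSQL Insert against a hypothetical table with these columns:

# ordinary column names are accessible as attributes
stmt.excluded.some_column

# special names and names that collide with dictionary methods
# require indexed access
stmt.excluded["column name"]
stmt.excluded["values"]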

      • attribute inherit_cache: Optional[bool] = False

        Indicate if this HasCacheKey instance should make use of the cache key generation scheme used by its immediate superclass.

The attribute defaults to None, which indicates that a construct has not yet taken into account whether or not it’s appropriate for it to participate in caching; this is functionally equivalent to setting the value to False, except that a warning is also emitted.

        This flag can be set to True on a particular class, if the SQL that corresponds to the object does not change based on attributes which are local to this class, and not its superclass.

        See also

Enabling Caching Support for Custom Constructs - General guidelines for setting the HasCacheKey.inherit_cache attribute for third-party or user defined SQL constructs.

      • method on_conflict_do_nothing(constraint=None, index_elements=None, index_where=None) → SelfInsert

        Specifies a DO NOTHING action for ON CONFLICT clause.

        The constraint and index_elements arguments are optional, but only one of these can be specified.

        • Parameters:

          • constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

          • index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

          • index_where

            Additional WHERE criterion that can be used to infer a conditional target index.

            New in version 1.1.

See also

INSERT…ON CONFLICT (Upsert)
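A brief sketch of DO NOTHING, reusing the hypothetical user_table and stmt from the insert() example above, with an assumed Connection conn; the conflict target is inferred from the "id" column:

do_nothing_stmt = stmt.on_conflict_do_nothing(index_elements=["id"])

# rows whose "id" already exists are silently skipped
conn.execute(do_nothing_stmt)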
• method on_conflict_do_update(constraint=None, index_elements=None, index_where=None, set_=None, where=None) → SelfInsert

        Specifies a DO UPDATE SET action for ON CONFLICT clause.

        Either the constraint or index_elements argument is required, but only one of these can be specified.

        • Parameters:

          • constraint – The name of a unique or exclusion constraint on the table, or the constraint object itself if it has a .name attribute.

          • index_elements – A sequence consisting of string column names, Column objects, or other column expression objects that will be used to infer a target index.

          • index_where – Additional WHERE criterion that can be used to infer a conditional target index.

          • set_

A dictionary or other mapping object where the keys are either names of columns in the target table, or Column objects or other ORM-mapped columns matching that of the target table, and expressions or literals as values, specifying the SET actions to take.

New in version 1.4: The Insert.on_conflict_do_update.set_ parameter supports Column objects from the target Table as keys.

            Warning

This dictionary does not take into account Python-specified default UPDATE values or generation functions, e.g. those specified using Column.onupdate. These values will not be exercised for an ON CONFLICT style of UPDATE, unless they are manually specified in the Insert.on_conflict_do_update.set_ dictionary.

          • where

            Optional argument. If present, can be a literal SQL string or an acceptable expression for a WHERE clause that restricts the rows affected by DO UPDATE SET. Rows not meeting the WHERE condition will not be updated (effectively a DO NOTHING for those rows).

            New in version 1.1.

See also

INSERT…ON CONFLICT (Upsert)
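A brief sketch of DO UPDATE SET using the excluded namespace, again reusing the hypothetical user_table with an assumed Connection conn:

from sqlalchemy.dialects.postgresql import insert

stmt = insert(user_table).values(id=1, name="alice")

# on conflict against the "id" column, overwrite "name" with the
# value that would have been inserted
upsert_stmt = stmt.on_conflict_do_update(
    index_elements=["id"],
    set_={"name": stmt.excluded.name},
)
conn.execute(upsert_stmt)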

      psycopg2

      Support for the PostgreSQL database via the psycopg2 driver.

      DBAPI

Documentation and download information (if applicable) for psycopg2 is available at: https://pypi.org/project/psycopg2/

      Connecting

      Connect String:

postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]

      psycopg2 Connect Arguments

      Keyword arguments that are specific to the SQLAlchemy psycopg2 dialect may be passed to create_engine(), and include the following:

• isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect. This option sets the default isolation level for the connection that is set immediately upon connection to the database before the connection is pooled. This option is generally superseded by the more modern Connection.execution_options.isolation_level execution option, detailed at Setting Transaction Isolation Levels including DBAPI Autocommit.

        See also

        Setting Transaction Isolation Levels including DBAPI Autocommit

      • client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2’s set_client_encoding() method.

See also

        Unicode with Psycopg2

      • executemany_mode, executemany_batch_page_size, executemany_values_page_size: Allows use of psycopg2 extensions for optimizing “executemany”-style queries. See the referenced section below for details.

        See also

        Psycopg2 Fast Execution Helpers

      Tip

The above keyword arguments are dialect keyword arguments, meaning that they are passed as explicit keyword arguments to create_engine():

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="SERIALIZABLE",
)

These should not be confused with DBAPI connect arguments, which are passed as part of the create_engine.connect_args dictionary and/or are passed in the URL query string, as detailed in the section Custom DBAPI connect() arguments / on-connect routines.

      SSL Connections

      The psycopg2 module has a connection argument named sslmode for controlling its behavior regarding secure (SSL) connections. The default is sslmode=prefer; it will attempt an SSL connection and if that fails it will fall back to an unencrypted connection. sslmode=require may be used to ensure that only secure connections are established. Consult the psycopg2 / libpq documentation for further options that are available.

      Note that sslmode is specific to psycopg2 so it is included in the connection URI:

engine = sa.create_engine(
    "postgresql+psycopg2://scott:tiger@192.168.0.199:5432/test?sslmode=require"
)

      Unix Domain Connections

      psycopg2 supports connecting via Unix domain connections. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:

      1. create_engine("postgresql+psycopg2://user:password@/dbname")

By default, psycopg2 connects to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:

      1. create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")

      See also

      PQconnectdbParams

      Specifying multiple fallback hosts

psycopg2 supports multiple connection points in the connection string. When the host parameter is used multiple times in the query section of the URL, SQLAlchemy will create a single string of the host and port information provided to make the connections. Tokens may consist of host:port or just host; in the latter case, the default port is selected by libpq. In the example below, three host connections are specified, for HostA:PortA, HostB connecting to the default port, and HostC:PortC:

create_engine(
    "postgresql+psycopg2://user:password@/dbname?host=HostA:PortA&host=HostB&host=HostC:PortC"
)

As an alternative, the libpq query string format may also be used; this specifies host and port as single query string arguments with comma-separated lists - the default port can be chosen by indicating an empty value in the comma-separated list:

create_engine(
    "postgresql+psycopg2://user:password@/dbname?host=HostA,HostB,HostC&port=PortA,,PortC"
)

With either URL style, a connection to each host is attempted based on a configurable strategy, which may be configured using the libpq target_session_attrs parameter. Per libpq this defaults to any, which indicates that a connection to each host is attempted until a connection is successful. Other strategies include primary, prefer-standby, etc. The complete list is documented by PostgreSQL at libpq connection strings.

      For example, to indicate two hosts using the primary strategy:

create_engine(
    "postgresql+psycopg2://user:password@/dbname?host=HostA:PortA&host=HostB&host=HostC:PortC&target_session_attrs=primary"
)

      Changed in version 1.4.40: Port specification in psycopg2 multiple host format is repaired, previously ports were not correctly interpreted in this context. libpq comma-separated format is also now supported.

      New in version 1.3.20: Support for multiple hosts in PostgreSQL connection string.

      See also

libpq connection strings - please refer to this section in the libpq documentation for complete background on multiple host support.

      Empty DSN Connections / Environment Variable Connections

The psycopg2 DBAPI can connect to PostgreSQL by passing an empty DSN to the libpq client library, which by default indicates to connect to a localhost PostgreSQL database that is open for “trust” connections. This behavior can be further tailored using a particular set of environment variables which are prefixed with PG (e.g. PGHOST, PGUSER, PGPASSWORD), which are consumed by libpq to take the place of any or all elements of the connection string.

      For this form, the URL can be passed without any elements other than the initial scheme:

engine = create_engine('postgresql+psycopg2://')

      In the above form, a blank “dsn” string is passed to the psycopg2.connect() function which in turn represents an empty DSN passed to libpq.

      New in version 1.3.2: support for parameter-less connections with psycopg2.

      See also

Environment Variables - PostgreSQL documentation on how to use the PG-prefixed environment variables for connections.

      Per-Statement/Connection Execution Options

The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), and Query.execution_options(), in addition to those not specific to DBAPIs:

• isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.

      • stream_results - Enable or disable usage of psycopg2 server side cursors - this feature makes use of “named” cursors in combination with special result handling methods so that result rows are not fully buffered. Defaults to False, meaning cursors are buffered by default.

      • max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time. This is interpreted by the BufferedRowCursorResult, and if omitted the buffer will grow to ultimately store 1000 rows at a time.

        Changed in version 1.4: The max_row_buffer size can now be greater than 1000, and the buffer will grow to that size.

      Psycopg2 Fast Execution Helpers

Modern versions of psycopg2 include a feature known as Fast Execution Helpers, which have been shown in benchmarking to improve psycopg2’s executemany() performance, primarily with INSERT statements, by at least an order of magnitude.

      SQLAlchemy implements a native form of the “insert many values” handler that will rewrite a single-row INSERT statement to accommodate for many values at once within an extended VALUES clause; this handler is equivalent to psycopg2’s execute_values() handler; an overview of this feature and its configuration are at “Insert Many Values” Behavior for INSERT statements.

New in version 2.0: Replaced psycopg2’s execute_values() fast execution helper with a native SQLAlchemy mechanism referred to as insertmanyvalues.

      The psycopg2 dialect retains the ability to use the psycopg2-specific execute_batch() feature, although it is not expected that this is a widely used feature. The use of this extension may be enabled using the executemany_mode flag which may be passed to create_engine():

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values_plus_batch')

      Possible options for executemany_mode include:

• 'values_only' - this is the default value. SQLAlchemy’s native handler is used for qualifying INSERT statements, assuming create_engine.use_insertmanyvalues is left at its default value of True. This handler rewrites simple INSERT statements to include multiple VALUES clauses so that many parameter sets can be inserted with one statement.

• 'values_plus_batch' - SQLAlchemy’s native handler is used for qualifying INSERT statements, assuming create_engine.use_insertmanyvalues is left at its default value of True. Then, psycopg2’s execute_batch() handler is used for qualifying UPDATE and DELETE statements when executed with multiple parameter sets. When using this mode, the CursorResult.rowcount attribute will not contain a value for executemany-style executions against UPDATE and DELETE statements.

      Changed in version 2.0: Removed the 'batch' and 'None' options from psycopg2 executemany_mode. Control over batching for INSERT statements is now configured via the create_engine.use_insertmanyvalues engine-level parameter.

The term “qualifying statements” refers to the statement being executed being a Core insert(), update() or delete() construct, and not a plain textual SQL string or one constructed using text(). It also may not be a special “extension” statement such as an “ON CONFLICT” “upsert” statement. When using the ORM, all insert/update/delete statements used by the ORM flush process are qualifying. A sketch of the distinction is shown below.
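The sketch uses a hypothetical single-column table t and an assumed Connection conn:

from sqlalchemy import column, insert, table, text

t = table("t", column("x"))

# qualifying: a Core insert() construct executed with multiple
# parameter sets, eligible for the fast execution handlers
conn.execute(insert(t), [{"x": 1}, {"x": 2}])

# not qualifying: plain textual SQL, even with executemany semantics
conn.execute(text("INSERT INTO t (x) VALUES (:x)"), [{"x": 1}, {"x": 2}])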

      The “page size” for the psycopg2 “batch” strategy can be affected by using the executemany_batch_page_size parameter, which defaults to 100.

For the “insertmanyvalues” feature, the page size can be controlled using the create_engine.insertmanyvalues_page_size parameter, which defaults to 1000. An example of modifying both parameters is below:

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values_plus_batch',
    insertmanyvalues_page_size=5000, executemany_batch_page_size=500)

      See also

      “Insert Many Values” Behavior for INSERT statements - background on “insertmanyvalues”

Sending Multiple Parameters - General information on using the Connection object to execute statements in such a way as to make use of the DBAPI .executemany() method.

      Unicode with Psycopg2

      The psycopg2 DBAPI driver supports Unicode data transparently.

      The client character encoding can be controlled for the psycopg2 dialect in the following ways:

      • For PostgreSQL 9.1 and above, the client_encoding parameter may be passed in the database URL; this parameter is consumed by the underlying libpq PostgreSQL client library:

        1. engine = create_engine("postgresql+psycopg2://user:pass@host/dbname?client_encoding=utf8")

        Alternatively, the above client_encoding value may be passed using create_engine.connect_args for programmatic establishment with libpq:

engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    connect_args={'client_encoding': 'utf8'}
)
• For all PostgreSQL versions, psycopg2 supports a client-side encoding value that will be passed to database connections when they are first established. The SQLAlchemy psycopg2 dialect supports this using the client_encoding parameter passed to create_engine():

engine = create_engine(
    "postgresql+psycopg2://user:pass@host/dbname",
    client_encoding="utf8"
)

        Tip

        The above client_encoding parameter admittedly is very similar in appearance to usage of the parameter within the create_engine.connect_args dictionary; the difference above is that the parameter is consumed by psycopg2 and is passed to the database connection using SET client_encoding TO 'utf8'; in the previously mentioned style, the parameter is instead passed through psycopg2 and consumed by the libpq library.

      • A common way to set up client encoding with PostgreSQL databases is to ensure it is configured within the server-side postgresql.conf file; this is the recommended way to set encoding for a server that is consistently of one encoding in all databases:

# postgresql.conf file

# client_encoding = sql_ascii  # actually, defaults to database encoding
client_encoding = utf8

      The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.

      Psycopg2 Transaction Isolation Level

As discussed in Transaction Isolation Level, all PostgreSQL dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options(). When using the psycopg2 dialect, these options make use of psycopg2’s set_isolation_level() connection method, rather than emitting a PostgreSQL directive; this is because psycopg2’s API-level setting is always emitted at the start of each transaction in any case.

      The psycopg2 dialect supports these constants for isolation level:

      • READ COMMITTED

      • READ UNCOMMITTED

      • REPEATABLE READ

      • SERIALIZABLE

      • AUTOCOMMIT

      See also

      Transaction Isolation Level

      NOTICE logging

      The psycopg2 dialect will log PostgreSQL NOTICE messages via the sqlalchemy.dialects.postgresql logger. When this logger is set to the logging.INFO level, notice messages will be logged:

import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

      Above, it is assumed that logging is configured externally. If this is not the case, configuration such as logging.basicConfig() must be utilized:

import logging
logging.basicConfig()  # log messages to stdout
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

      See also

Logging HOWTO - on the python.org website

      HSTORE type

      The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type. The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:

      1. Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.

      2. If the use_native_hstore flag is at its default of True, and we’ve detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.

      The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:

      1. engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
      2. use_native_hstore=False)

      The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used. It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy’s own marshalling logic, and not that of psycopg2 which may be more performant.
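As a brief sketch of the behavior described above, using a hypothetical data table and an assumed Connection conn, plain Python dictionaries are passed as HSTORE parameters either way:

from sqlalchemy import Column, Integer, MetaData, Table
from sqlalchemy.dialects.postgresql import HSTORE

metadata = MetaData()
data = Table(
    "data",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("tags", HSTORE),
)

# the dict is marshalled to an HSTORE value, by register_hstore()
# when available, otherwise by SQLAlchemy's own logic
conn.execute(data.insert(), {"id": 1, "tags": {"color": "red"}})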

      psycopg

      Support for the PostgreSQL database via the psycopg (a.k.a. psycopg 3) driver.

      DBAPI

Documentation and download information (if applicable) for psycopg (a.k.a. psycopg 3) is available at: https://pypi.org/project/psycopg/

      Connecting

      Connect String:

postgresql+psycopg://user:password@host:port/dbname[?key=value&key=value...]

      psycopg is the package and module name for version 3 of the psycopg database driver, formerly known as psycopg2. This driver is different enough from its psycopg2 predecessor that SQLAlchemy supports it via a totally separate dialect; support for psycopg2 is expected to remain for as long as that package continues to function for modern Python versions, and also remains the default dialect for the postgresql:// dialect series.

      The SQLAlchemy psycopg dialect provides both a sync and an async implementation under the same dialect name. The proper version is selected depending on how the engine is created:

• calling create_engine() with postgresql+psycopg://... will automatically select the sync version, e.g.:

from sqlalchemy import create_engine
sync_engine = create_engine("postgresql+psycopg://scott:tiger@localhost/test")
      • calling create_async_engine() with postgresql+psycopg://... will automatically select the async version, e.g.:

from sqlalchemy.ext.asyncio import create_async_engine
asyncio_engine = create_async_engine("postgresql+psycopg://scott:tiger@localhost/test")

      The asyncio version of the dialect may also be specified explicitly using the psycopg_async suffix, as:

from sqlalchemy.ext.asyncio import create_async_engine
asyncio_engine = create_async_engine("postgresql+psycopg_async://scott:tiger@localhost/test")

      The psycopg dialect has the same API features as that of psycopg2, with the exception of the “fast executemany” helpers. The “fast executemany” helpers are expected to be generalized and ported to psycopg before the final release of SQLAlchemy 2.0, however.

      See also

psycopg2 - The SQLAlchemy psycopg dialect shares most of its behavior with the psycopg2 dialect. Further documentation is available there.

      pg8000

      Support for the PostgreSQL database via the pg8000 driver.

      DBAPI

      Documentation and download information (if applicable) for pg8000 is available at: https://pypi.org/project/pg8000/

      Connecting

      Connect String:

postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]

      Changed in version 1.4: The pg8000 dialect has been updated for version 1.16.6 and higher, and is again part of SQLAlchemy’s continuous integration with full feature support.

      Unicode

      pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:

#client_encoding = sql_ascii  # actually, defaults to database encoding
client_encoding = utf8

      The client_encoding can be overridden for a session by executing the SQL:

SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

engine = create_engine(
    "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')

      SSL Connections

pg8000 accepts a Python SSLContext object which may be specified using the create_engine.connect_args dictionary:

import ssl
ssl_context = ssl.create_default_context()
engine = sa.create_engine(
    "postgresql+pg8000://scott:tiger@192.168.0.199/test",
    connect_args={"ssl_context": ssl_context},
)

      If the server uses an automatically-generated certificate that is self-signed or does not match the host name (as seen from the client), it may also be necessary to disable hostname checking:

import ssl
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE
engine = sa.create_engine(
    "postgresql+pg8000://scott:tiger@192.168.0.199/test",
    connect_args={"ssl_context": ssl_context},
)

      pg8000 Transaction Isolation Level

The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:

      • READ COMMITTED

      • READ UNCOMMITTED

      • REPEATABLE READ

      • SERIALIZABLE

      • AUTOCOMMIT

      See also

      Transaction Isolation Level

asyncpg

Support for the PostgreSQL database via the asyncpg driver.

      DBAPI

      Documentation and download information (if applicable) for asyncpg is available at: https://magicstack.github.io/asyncpg/

      Connecting

      Connect String:

postgresql+asyncpg://user:password@host:port/dbname[?key=value&key=value...]

      The asyncpg dialect is SQLAlchemy’s first Python asyncio dialect.

      Using a special asyncio mediation layer, the asyncpg dialect is usable as the backend for the SQLAlchemy asyncio extension package.

This dialect should normally be used only with the create_async_engine() engine creation function:

from sqlalchemy.ext.asyncio import create_async_engine
engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname")

The dialect can also be run as a “synchronous” dialect within the create_engine() function, which will pass “await” calls into an ad-hoc event loop. This mode of operation is of limited use and is for special testing scenarios only. The mode can be enabled by adding the SQLAlchemy-specific flag async_fallback to the URL in conjunction with create_engine():

# for testing purposes only; do not use in production!
engine = create_engine("postgresql+asyncpg://user:pass@hostname/dbname?async_fallback=true")

      New in version 1.4.

      Note

By default asyncpg does not decode the json and jsonb types and returns them as strings. SQLAlchemy sets a default type decoder for the json and jsonb types using the Python builtin json.loads function. The json implementation used can be changed by setting the attribute json_deserializer when creating the engine with create_engine() or create_async_engine().
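For instance, a sketch of swapping in an alternative deserializer; the third-party orjson library is used here purely as an example and is an assumption, not a requirement:

import orjson  # assumed to be installed; any json.loads-compatible callable works

from sqlalchemy.ext.asyncio import create_async_engine

engine = create_async_engine(
    "postgresql+asyncpg://user:pass@hostname/dbname",
    json_deserializer=orjson.loads,  # replaces the default json.loads
)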

      Prepared Statement Cache

      The asyncpg SQLAlchemy dialect makes use of asyncpg.connection.prepare() for all statements. The prepared statement objects are cached after construction which appears to grant a 10% or more performance improvement for statement invocation. The cache is on a per-DBAPI connection basis, which means that the primary storage for prepared statements is within DBAPI connections pooled within the connection pool. The size of this cache defaults to 100 statements per DBAPI connection and may be adjusted using the prepared_statement_cache_size DBAPI argument (note that while this argument is implemented by SQLAlchemy, it is part of the DBAPI emulation portion of the asyncpg dialect, therefore is handled as a DBAPI argument, not a dialect argument):

engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=500")

      To disable the prepared statement cache, use a value of zero:

engine = create_async_engine("postgresql+asyncpg://user:pass@hostname/dbname?prepared_statement_cache_size=0")

      New in version 1.4.0b2: Added prepared_statement_cache_size for asyncpg.

      Warning

      The asyncpg database driver necessarily uses caches for PostgreSQL type OIDs, which become stale when custom PostgreSQL datatypes such as ENUM objects are changed via DDL operations. Additionally, prepared statements themselves which are optionally cached by SQLAlchemy’s driver as described above may also become “stale” when DDL has been emitted to the PostgreSQL database which modifies the tables or other objects involved in a particular prepared statement.

      The SQLAlchemy asyncpg dialect will invalidate these caches within its local process when statements that represent DDL are emitted on a local connection, but this is only controllable within a single Python process / database engine. If DDL changes are made from other database engines and/or processes, a running application may encounter asyncpg exceptions InvalidCachedStatementError and/or InternalServerError("cache lookup failed for type <oid>") if it refers to pooled database connections which operated upon the previous structures. The SQLAlchemy asyncpg dialect will recover from these error cases when the driver raises these exceptions by clearing its internal caches as well as those of the asyncpg driver in response to them, but cannot prevent them from being raised in the first place if the cached prepared statement or asyncpg type caches have gone stale, nor can it retry the statement as the PostgreSQL transaction is invalidated when these errors occur.

      Disabling the PostgreSQL JIT to improve ENUM datatype handling

Asyncpg has an issue when using PostgreSQL ENUM datatypes, where upon the creation of new database connections, an expensive query may be emitted in order to retrieve metadata regarding custom types, which has been shown to negatively affect performance. To mitigate this issue, the PostgreSQL “jit” setting may be disabled from the client using this setting passed to create_async_engine():

engine = create_async_engine(
    "postgresql+asyncpg://user:password@localhost/tmp",
    connect_args={"server_settings": {"jit": "off"}},
)

      See also

      https://github.com/MagicStack/asyncpg/issues/727

      psycopg2cffi

      Support for the PostgreSQL database via the psycopg2cffi driver.

      DBAPI

Documentation and download information (if applicable) for psycopg2cffi is available at: https://pypi.org/project/psycopg2cffi/

Connect String:

postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]

psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in e.g. PyPy. Documentation is as per psycopg2.

      New in version 1.0.0.

      See also