Microsoft SQL Server

    The following table summarizes current support levels for database release versions.

    The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.

    External Dialects

    In addition to the above DBAPI layers with native SQLAlchemy support, there are third-party dialects for other DBAPI layers that are compatible with SQL Server. See the “External Dialects” list on the Dialects page.

    Auto Increment Behavior / IDENTITY Columns

    SQL Server provides so-called “auto incrementing” behavior using the IDENTITY construct, which can be placed on any single integer column in a table. SQLAlchemy considers IDENTITY within its default “autoincrement” behavior for an integer primary key column, described at Column.autoincrement. This means that by default, the first integer primary key column in a Table will be considered to be the identity column - unless it is associated with a Sequence - and will be rendered with IDENTITY in the DDL.
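
    A representative table definition, as a sketch (engine is assumed to be an existing Engine):

        from sqlalchemy import Table, MetaData, Column, Integer

        m = MetaData()
        t = Table('t', m,
                Column('id', Integer, primary_key=True),
                Column('x', Integer))
        m.create_all(engine)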

    The above example will generate DDL as:

        CREATE TABLE t (
            id INTEGER NOT NULL IDENTITY,
            x INTEGER NULL,
            PRIMARY KEY (id)
        )

    For the case where this default generation of IDENTITY is not desired, specify False for the Column.autoincrement flag, on the first integer primary key column:

        m = MetaData()
        t = Table('t', m,
                Column('id', Integer, primary_key=True, autoincrement=False),
                Column('x', Integer))
        m.create_all(engine)

    To add the IDENTITY keyword to a non-primary key column, specify True for the Column.autoincrement flag on the desired Column object, and ensure that Column.autoincrement is set to False on any integer primary key column:

        m = MetaData()
        t = Table('t', m,
                Column('id', Integer, primary_key=True, autoincrement=False),
                Column('x', Integer, autoincrement=True))
        m.create_all(engine)

    Changed in version 1.4: Added the Identity construct in a Column to specify the start and increment parameters of an IDENTITY. These replace the use of the Sequence object in order to specify these values.

    Deprecated since version 1.4: The mssql_identity_start and mssql_identity_increment parameters to Column are deprecated and should be replaced by an Identity object. Specifying both ways of configuring an IDENTITY will result in a compile error. These options are also no longer returned as part of the dialect_options key in Inspector.get_columns(). Use the information in the identity key instead.

    Deprecated since version 1.3: The use of Sequence to specify IDENTITY characteristics is deprecated and will be removed in a future release. Please use the Identity object parameters Identity.start and Identity.increment.

    Changed in version 1.4: Removed the ability to use a Sequence object to modify IDENTITY characteristics. Sequence objects now only manipulate true T-SQL SEQUENCE types.

    Note

    There can only be one IDENTITY column on the table. When using autoincrement=True to enable the IDENTITY keyword, SQLAlchemy does not guard against multiple columns specifying the option simultaneously. The SQL Server database will instead reject the CREATE TABLE statement.

    Note

    An INSERT statement which attempts to provide a value for a column that is marked with IDENTITY will be rejected by SQL Server. In order for the value to be accepted, a session-level option “SET IDENTITY_INSERT” must be enabled. The SQLAlchemy SQL Server dialect will perform this operation automatically when using a core Insert construct; if the execution specifies a value for the IDENTITY column, the “IDENTITY_INSERT” option will be enabled for the span of that statement’s invocation. However, this scenario is not high performing and should not be relied upon for normal use. If a table doesn’t actually require IDENTITY behavior in its integer primary key column, the keyword should be disabled when creating the table by ensuring that autoincrement=False is set.

    Specific control over the “start” and “increment” values for the IDENTITY generator is provided using the Identity.start and Identity.increment parameters passed to the Identity object:

        from sqlalchemy import Table, Integer, Column, Identity

        test = Table(
            'test', metadata,
            Column(
                'id',
                Integer,
                Identity(start=100, increment=10),
                primary_key=True
            ),
            Column('name', String(20))
        )

    The CREATE TABLE for the above Table object would be:

        CREATE TABLE test (
            id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
            name VARCHAR(20) NULL
        )

    Note

    The Identity object supports many other parameters in addition to start and increment. These are not supported by SQL Server and will be ignored when generating the CREATE TABLE DDL.

    Changed in version 1.3.19: The Identity object is now used to affect the IDENTITY generator for a Column under SQL Server. Previously, the Sequence object was used. As SQL Server now supports real sequences as a separate construct, Sequence will be functional in the normal way starting from SQLAlchemy version 1.4.

    Using IDENTITY with Non-Integer numeric types

    SQL Server also allows IDENTITY to be used with NUMERIC columns. To implement this pattern smoothly in SQLAlchemy, the primary datatype of the column should remain as Integer, however the underlying implementation type deployed to the SQL Server database can be specified as Numeric using TypeEngine.with_variant():

        from sqlalchemy import Column
        from sqlalchemy import Integer
        from sqlalchemy import Numeric
        from sqlalchemy import String
        from sqlalchemy.ext.declarative import declarative_base

        Base = declarative_base()

        class TestTable(Base):
            __tablename__ = "test"
            id = Column(
                Integer().with_variant(Numeric(10, 0), "mssql"),
                primary_key=True,
                autoincrement=True,
            )
            name = Column(String)

    In the above example, Integer().with_variant() provides clear usage information that accurately describes the intent of the code. The general restriction that autoincrement only applies to Integer is established at the metadata level and not at the per-dialect level.

    When using the above pattern, the primary key identifier that comes back from the insertion of a row, which is also the value that would be assigned to an ORM object such as TestTable above, will be an instance of Decimal() and not int when using SQL Server. The numeric return type of the Numeric type can be changed to return floats by passing False to Numeric.asdecimal. To normalize the return type of the above Numeric(10, 0) to return Python ints (which also support “long” integer values in Python 3), use TypeDecorator as follows:

        from sqlalchemy import TypeDecorator

        class NumericAsInteger(TypeDecorator):
            '''normalize floating point return values into ints'''

            impl = Numeric(10, 0, asdecimal=False)
            cache_ok = True

            def process_result_value(self, value, dialect):
                if value is not None:
                    value = int(value)
                return value

        class TestTable(Base):
            __tablename__ = "test"
            id = Column(
                Integer().with_variant(NumericAsInteger, "mssql"),
                primary_key=True,
                autoincrement=True,
            )
            name = Column(String)

    INSERT behavior

    Handling of the IDENTITY column at INSERT time involves two key techniques. The most common is being able to fetch the “last inserted value” for a given IDENTITY column, a process which SQLAlchemy performs implicitly in many cases, most importantly within the ORM.

    The process for fetching this value has several variants:

    • In the vast majority of cases, RETURNING is used in conjunction with INSERT statements on SQL Server in order to get newly generated primary key values:

        INSERT INTO t (x) OUTPUT inserted.id VALUES (?)

      As of SQLAlchemy 2.0, the “Insert Many Values” Behavior for INSERT statements feature is also used by default to optimize many-row INSERT statements; for SQL Server the feature takes place for both RETURNING and non-RETURNING INSERT statements.

    • The value of create_engine.insertmanyvalues_page_size defaults to 1000, however the ultimate page size for a particular INSERT statement may be limited further, based on an observed limit of 2100 bound parameters for a single statement in SQL Server. The page size may also be modified on a per-engine or per-statement basis; see the section Controlling the Batch Size for details, as well as the sketch following this list.

    • When RETURNING is not available or has been disabled via implicit_returning=False, either the scope_identity() function or the @@identity variable is used; behavior varies by backend:

      • when using PyODBC, the phrase ; select scope_identity() will be appended to the end of the INSERT statement; a second result set will be fetched in order to receive the value. Given a table as:

        t = Table(
            't',
            metadata,
            Column('id', Integer, primary_key=True),
            Column('x', Integer),
            implicit_returning=False
        )

        an INSERT will look like:

        INSERT INTO t (x) VALUES (?); select scope_identity()
      • Other dialects such as pymssql will call upon SELECT scope_identity() AS lastrowid subsequent to an INSERT statement. If the flag use_scope_identity=False is passed to create_engine(), the statement SELECT @@identity AS lastrowid is used instead.
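
    As a sketch of adjusting the “insertmanyvalues” page size mentioned in the list above, either engine-wide via create_engine.insertmanyvalues_page_size or per statement via execution options (the URL, my_table, and list_of_rows names here are illustrative):

        from sqlalchemy import create_engine

        # engine-wide: cap the number of rows rendered per INSERT
        engine = create_engine(
            "mssql+pyodbc://scott:tiger@some_dsn",
            insertmanyvalues_page_size=500,
        )

        # per-statement override via execution options
        with engine.begin() as conn:
            conn.execute(
                my_table.insert().execution_options(insertmanyvalues_page_size=200),
                list_of_rows,
            )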

    A table that contains an IDENTITY column will prohibit an INSERT statement that refers to the identity column explicitly. The SQLAlchemy dialect will detect when an INSERT construct, created using a core insert() construct (not a plain string SQL), refers to the identity column, and in this case will emit SET IDENTITY_INSERT ON prior to the insert statement proceeding, and SET IDENTITY_INSERT OFF subsequent to the execution. Given this example:

        m = MetaData()
        t = Table('t', m, Column('id', Integer, primary_key=True),
                Column('x', Integer))
        m.create_all(engine)

        with engine.begin() as conn:
            conn.execute(t.insert(), [{'id': 1, 'x': 1}, {'id': 2, 'x': 2}])

    The above column will be created with IDENTITY, however the INSERT statement we emit is specifying explicit values. In the echo output we can see how SQLAlchemy handles this:

        CREATE TABLE t (
            id INTEGER NOT NULL IDENTITY(1,1),
            x INTEGER NULL,
            PRIMARY KEY (id)
        )

        COMMIT
        SET IDENTITY_INSERT t ON
        INSERT INTO t (id, x) VALUES (?, ?)
        ((1, 1), (2, 2))
        SET IDENTITY_INSERT t OFF
        COMMIT

    This is an auxiliary use case suitable for testing and bulk insert scenarios.

    SEQUENCE support

    The Sequence object creates “real” sequences, i.e., CREATE SEQUENCE:

        >>> from sqlalchemy import Sequence
        >>> from sqlalchemy.schema import CreateSequence
        >>> from sqlalchemy.dialects import mssql
        >>> print(CreateSequence(Sequence("my_seq", start=1)).compile(dialect=mssql.dialect()))
        CREATE SEQUENCE my_seq START WITH 1

    For integer primary key generation, SQL Server’s IDENTITY construct should generally be preferred vs. sequence.
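
    Where a sequence-driven column is desired nonetheless, the Sequence may be associated with the Column in the usual way; a minimal sketch (the sequence name is illustrative). Per the behavior described under Auto Increment Behavior above, the presence of the Sequence prevents the column from being rendered with IDENTITY:

        from sqlalchemy import Column, Integer, MetaData, Sequence, Table

        metadata = MetaData()
        t = Table(
            "t",
            metadata,
            Column("id", Integer, Sequence("t_id_seq", start=1), primary_key=True),
            Column("x", Integer),
        )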

    Tip

    The default start value for T-SQL is -2**63 instead of 1 as in most other SQL databases. Users should explicitly set the Sequence.start parameter to 1 if that’s the expected default:

        seq = Sequence("my_sequence", start=1)

    New in version 1.4: added SQL Server support for Sequence

    Changed in version 2.0: The SQL Server dialect will no longer implicitly render “START WITH 1” for CREATE SEQUENCE, which was the behavior first implemented in version 1.4.

    MAX on VARCHAR / NVARCHAR

    SQL Server supports the special string “MAX” within the VARCHAR and NVARCHAR datatypes, to indicate “maximum length possible”. The dialect currently handles this as a length of “None” in the base type, rather than supplying a dialect-specific version of these types, so that a base type specified such as VARCHAR(None) can assume “unlengthed” behavior on more than one backend without using dialect-specific types.

    To build a SQL Server VARCHAR or NVARCHAR with MAX length, use None:

        my_table = Table(
            'my_table', metadata,
            Column('my_data', VARCHAR(None)),
            Column('my_n_data', NVARCHAR(None))
        )

    Collation Support

    Character collations are supported by the base string types, specified by the string argument “collation”:

        from sqlalchemy import VARCHAR

        Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))

    When such a column is associated with a Table, the CREATE TABLE statement for this column will yield:

        login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL

    LIMIT/OFFSET Support

    MSSQL has added support for LIMIT / OFFSET as of SQL Server 2012, via the “OFFSET n ROWS” and “FETCH NEXT n ROWS” clauses. SQLAlchemy supports these syntaxes automatically if SQL Server 2012 or greater is detected.

    Changed in version 1.4: support added for SQL Server “OFFSET n ROWS” and “FETCH NEXT n ROWS” syntax.

    For statements that specify only LIMIT and no OFFSET, all versions of SQL Server support the TOP keyword. This syntax is used for all SQL Server versions when no OFFSET clause is present. A statement such as:

        select(some_table).limit(5)

    will render similarly to:

        SELECT TOP 5 col1, col2.. FROM table

    For versions of SQL Server prior to SQL Server 2012, a statement that uses LIMIT and OFFSET, or just OFFSET alone, will be rendered using the ROW_NUMBER() window function. A statement such as:

        select(some_table).order_by(some_table.c.col3).limit(5).offset(10)

    will render similarly to:

        SELECT anon_1.col1, anon_1.col2 FROM (SELECT col1, col2,
            ROW_NUMBER() OVER (ORDER BY col3) AS
            mssql_rn FROM table WHERE t.x = :x_1) AS
            anon_1 WHERE mssql_rn > :param_1 AND mssql_rn <= :param_2 + :param_1

    Note that when using LIMIT and/or OFFSET, whether using the older or newer SQL Server syntaxes, the statement must have an ORDER BY as well, else a CompileError is raised.

    DDL Comment Support

    Comment support, which includes DDL rendering for attributes such as Table.comment and Column.comment, as well as the ability to reflect these comments, is supported assuming a supported version of SQL Server is in use. If a non-supported version such as Azure Synapse is detected at first-connect time (based on the presence of the fn_listextendedproperty SQL function), comment support including rendering and table-comment reflection is disabled, as both features rely upon SQL Server stored procedures and functions that are not available on all backend types.
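
    As a brief sketch of the generic comment parameters in use (the table and column names are illustrative):

        from sqlalchemy import Column, Integer, MetaData, String, Table

        metadata = MetaData()
        account = Table(
            "account",
            metadata,
            Column("id", Integer, primary_key=True, comment="surrogate key"),
            Column("name", String(50), comment="full account name"),
            comment="user account information",
        )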

    To force comment support to be on or off, bypassing autodetection, set the parameter supports_comments within create_engine():

        e = create_engine("mssql+pyodbc://u:p@dsn", supports_comments=False)

    New in version 2.0: Added support for table and column comments for the SQL Server dialect, including DDL generation and reflection.

    All SQL Server dialects support setting of transaction isolation level both via the dialect-specific create_engine.isolation_level parameter accepted by create_engine(), as well as the isolation_level argument as passed to Connection.execution_options(). This feature works by issuing the command SET TRANSACTION ISOLATION LEVEL <level> for each new connection.

    To set isolation level using create_engine():

        engine = create_engine(
            "mssql+pyodbc://scott:tiger@ms_2008",
            isolation_level="REPEATABLE READ"
        )

    To set it using per-connection execution options, a minimal sketch (engine is assumed to be an existing Engine):
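
        connection = engine.connect()
        connection = connection.execution_options(
            isolation_level="READ COMMITTED"
        )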

    Valid values for isolation_level include:

    • AUTOCOMMIT - pyodbc / pymssql-specific

    • READ COMMITTED

    • READ UNCOMMITTED

    • REPEATABLE READ

    • SERIALIZABLE

    • SNAPSHOT - specific to SQL Server

    There are also more options for isolation level configurations, such as “sub-engine” objects linked to a main Engine which each apply different isolation level settings. See the discussion at Setting Transaction Isolation Levels including DBAPI Autocommit for background.
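
    A minimal sketch of that “sub-engine” pattern (the DSN is illustrative); Engine.execution_options() returns a copy of the Engine that shares the parent’s connection pool while applying its own isolation level to the connections it checks out:

        from sqlalchemy import create_engine

        eng = create_engine("mssql+pyodbc://scott:tiger@some_dsn")
        autocommit_engine = eng.execution_options(isolation_level="AUTOCOMMIT")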

    See also

    Setting Transaction Isolation Levels including DBAPI Autocommit

    Temporary Table / Resource Reset for Connection Pooling

    The QueuePool connection pool implementation used by the SQLAlchemy Engine object includes reset on return behavior that will invoke the DBAPI .rollback() method when connections are returned to the pool. While this rollback will clear out the immediate state used by the previous transaction, it does not cover a wider range of session-level state, including temporary tables as well as other server state such as prepared statement handles and statement caches. An undocumented SQL Server procedure known as sp_reset_connection is known to be a workaround for this issue which will reset most of the session state that builds up on a connection, including temporary tables.

    To install sp_reset_connection as the means of performing reset-on-return, the PoolEvents.reset() event hook may be used, as demonstrated in the example below. The create_engine.pool_reset_on_return parameter is set to None so that the custom scheme can replace the default behavior completely. The custom hook implementation calls .rollback() in any case, as it’s usually important that the DBAPI’s own tracking of commit/rollback will remain consistent with the state of the transaction:

        from sqlalchemy import create_engine
        from sqlalchemy import event

        mssql_engine = create_engine(
            "mssql+pyodbc://scott:tiger^5HHH@mssql2017:1433/test?driver=ODBC+Driver+17+for+SQL+Server",
            # disable default reset-on-return scheme
            pool_reset_on_return=None,
        )

        @event.listens_for(mssql_engine, "reset")
        def _reset_mssql(dbapi_connection, connection_record, reset_state):
            if not reset_state.terminate_only:
                dbapi_connection.execute("{call sys.sp_reset_connection}")

            # so that the DBAPI itself knows that the connection has been
            # reset
            dbapi_connection.rollback()

    Changed in version 2.0.0b3: Added additional state arguments to the PoolEvents.reset() event and additionally ensured the event is invoked for all “reset” occurrences, so that it’s appropriate as a place for custom “reset” handlers. Previous schemes which use the PoolEvents.checkin() handler remain usable as well.

    See also

    Reset On Return - in the Connection Pooling documentation

    Nullability

    MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the CREATE TABLE construct:

        name VARCHAR(20) NULL

    If nullable=None is specified then no specification is made. In other words the database’s configured default is used. This will render:

        name VARCHAR(20)

    If nullable is True or False then the column will be NULL or NOT NULL respectively.
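
    A compact sketch of the three settings (column names are illustrative):

        from sqlalchemy import Column, String

        Column("a", String(20))                  # renders "a VARCHAR(20) NULL"
        Column("b", String(20), nullable=None)   # renders "b VARCHAR(20)"
        Column("c", String(20), nullable=False)  # renders "c VARCHAR(20) NOT NULL"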

    Date / Time Handling

    DATE and TIME are supported. Bind parameters are converted to datetime.datetime() objects as required by most MSSQL drivers, and results are processed from strings if needed. The DATE and TIME types are not available for MSSQL 2005 and previous - if a server version below 2008 is detected, DDL for these types will be issued as DATETIME.

    Large Text/Binary Type Deprecation

    Per SQL Server 2012/2014 Documentation, the NTEXT, TEXT and IMAGE datatypes are to be removed from SQL Server in a future release. SQLAlchemy normally relates these types to the UnicodeText, Text and LargeBinary datatypes.

    In order to accommodate this change, a new flag deprecate_large_types is added to the dialect, which will be automatically set based on detection of the server version in use, if not otherwise set by the user. The behavior of this flag is as follows:

    • When this flag is True, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NVARCHAR(max), VARCHAR(max), and VARBINARY(max), respectively. This is a new behavior as of the addition of this flag.

    • When this flag is False, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NTEXT, TEXT, and IMAGE, respectively. This is the long-standing behavior of these types.

    • The flag begins with the value None, before a database connection is established. If the dialect is used to render DDL without the flag being set, it is interpreted the same as False.

    • On first connection, the dialect detects if SQL Server version 2012 or greater is in use; if the flag is still at None, it sets it to True or False based on whether 2012 or greater is detected.

    • The flag can be set to either True or False when the dialect is created, typically via create_engine():

        eng = create_engine("mssql+pymssql://user:pass@host/db",
                        deprecate_large_types=True)
    • Complete control over whether the “old” or “new” types are rendered is available in all SQLAlchemy versions by using the UPPERCASE type objects instead: NVARCHAR, VARCHAR, NTEXT, TEXT, VARBINARY, IMAGE will always remain fixed and always output exactly that type.

    New in version 1.0.0.

    Multipart Schema Names

    SQL Server schemas sometimes require multiple parts to their “schema” qualifier, that is, including the database name and owner name as separate tokens, such as mydatabase.dbo.some_table. These multipart names can be set at once using the Table.schema argument of Table:

        Table(
            "some_table", metadata,
            Column("q", String(50)),
            schema="mydatabase.dbo"
        )

    When performing operations such as table or component reflection, a schema argument that contains a dot will be split into separate “database” and “owner” components in order to correctly query the SQL Server information schema tables, as these two values are stored separately. Additionally, when rendering the schema name for DDL or SQL, the two components will be quoted separately for case sensitive names and other special characters. Given an argument as below:

        Table(
            "some_table", metadata,
            Column("q", String(50)),
            schema="MyDataBase.dbo"
        )

    The above schema would be rendered as [MyDataBase].dbo, and also in reflection, would be reflected using “dbo” as the owner and “MyDataBase” as the database name.
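
    Reflection against such a schema can pass the same dotted value; a short sketch, assuming an existing Engine named engine:

        from sqlalchemy import inspect

        insp = inspect(engine)
        # the dotted schema is split into database and owner internally
        tables = insp.get_table_names(schema="MyDataBase.dbo")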

    To control how the schema name is broken into database / owner, specify brackets (which in SQL Server are quoting characters) in the name. Below, the “owner” will be considered as MyDataBase.dbo and the “database” will be None:

        Table(
            "some_table", metadata,
            Column("q", String(50)),
            schema="[MyDataBase.dbo]"
        )

    To individually specify both database and owner name with special characters or embedded dots, use two sets of brackets:

        Table(
            "some_table", metadata,
            Column("q", String(50)),
            schema="[MyDataBase.Period].[MyOwner.Dot]"
        )

    Changed in version 1.2: the SQL Server dialect now treats brackets as identifier delimiters splitting the schema into separate database and owner tokens, to allow dots within either name itself.

    Legacy Schema Mode

    Very old versions of the MSSQL dialect introduced the behavior such that a schema-qualified table would be auto-aliased when used in a SELECT statement; given a table:

        account_table = Table(
            'account', metadata,
            Column('id', Integer, primary_key=True),
            Column('info', String(100)),
            schema="customer_schema"
        )

    this legacy mode of rendering would assume that “customer_schema.account” would not be accepted by all parts of the SQL statement, as illustrated below:

        >>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=True)
        >>> print(account_table.select().compile(eng))
        SELECT account_1.id, account_1.info
        FROM customer_schema.account AS account_1

    This mode of behavior is now off by default, as it appears to have served no purpose; however in the case that legacy applications rely upon it, it is available using the legacy_schema_aliasing argument to create_engine() as illustrated above.

    Changed in version 1.1: the legacy_schema_aliasing flag introduced in version 1.0.5 to allow disabling of legacy mode for schemas now defaults to False.

    Deprecated since version 1.4: The legacy_schema_aliasing flag is now deprecated and will be removed in a future release.

    Clustered Index Support

    The MSSQL dialect supports clustered indexes (and primary keys) via the mssql_clustered option. This option is available to Index, UniqueConstraint and PrimaryKeyConstraint.

    To generate a clustered index:

        Index("my_index", table.c.x, mssql_clustered=True)

    which renders the index as CREATE CLUSTERED INDEX my_index ON table (x).

    To generate a clustered primary key use:

        Table('my_table', metadata,
              Column('x', ...),
              Column('y', ...),
              PrimaryKeyConstraint("x", "y", mssql_clustered=True))

    which will render the table, for example, as:

        CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
                               PRIMARY KEY CLUSTERED (x, y))

    Similarly, we can generate a clustered unique constraint using:

        Table('my_table', metadata,
              Column('x', ...),
              Column('y', ...),
              PrimaryKeyConstraint("x"),
              UniqueConstraint("y", mssql_clustered=True),
              )

    To explicitly request a non-clustered primary key (for example, when a separate clustered index is desired), use:

        Table('my_table', metadata,
              Column('x', ...),
              Column('y', ...),
              PrimaryKeyConstraint("x", "y", mssql_clustered=False))

    which will render the table, for example, as:

        CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
                               PRIMARY KEY NONCLUSTERED (x, y))

    Changed in version 1.1: the mssql_clustered option now defaults to None, rather than False. mssql_clustered=False now explicitly renders the NONCLUSTERED clause, whereas None omits the CLUSTERED clause entirely, allowing SQL Server defaults to take effect.

    In addition to clustering, the MSSQL dialect supports other special options for Index.

    INCLUDE

    The mssql_include option renders INCLUDE(colname) for the given string names:

        Index("my_index", table.c.x, mssql_include=['y'])

    would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y)

    Filtered Indexes

    The mssql_where option renders WHERE(condition) for the given criterion:

        Index("my_index", table.c.x, mssql_where=table.c.x > 10)

    would render the index as CREATE INDEX my_index ON table (x) WHERE x > 10.

    New in version 1.3.4.

    Index ordering is available via functional expressions, such as:

        Index("my_index", table.c.x.desc())

    would render the index as CREATE INDEX my_index ON table (x DESC)


    Compatibility Levels

    MSSQL supports the notion of setting compatibility levels at the database level. This allows, for instance, to run a database that is compatible with SQL2000 while running on a SQL2005 database server. server_version_info will always return the database server version information (in this case SQL2005) and not the compatibility level information. Because of this, if running under a backwards compatibility mode SQLAlchemy may attempt to use T-SQL statements that are unable to be parsed by the database server.

    Triggers

    SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary key values via IDENTITY columns or other server side defaults. MS-SQL does not allow the usage of OUTPUT INSERTED on tables that have triggers. To disable the usage of OUTPUT INSERTED on a per-table basis, specify implicit_returning=False for each Table which has triggers:

        Table('mytable', metadata,
              Column('id', Integer, primary_key=True),
              # ...,
              implicit_returning=False
              )

    Declarative form:

        class MyClass(Base):
            # ...
            __table_args__ = {'implicit_returning': False}

    Rowcount Support / ORM Versioning

    The SQL Server drivers may have limited ability to return the number of rows updated from an UPDATE or DELETE statement.

    As of this writing, the PyODBC driver is not able to return a rowcount when OUTPUT INSERTED is used. This impacts the SQLAlchemy ORM’s versioning feature in many cases where server-side value generators are in use in that while the versioning operations can succeed, the ORM cannot always check that an UPDATE or DELETE statement matched the number of rows expected, which is how it verifies that the version identifier matched. When this condition occurs, a warning will be emitted but the operation will proceed.

    The use of OUTPUT INSERTED can be disabled by setting the Table.implicit_returning flag to False on a particular Table, which in declarative looks like:

        class MyTable(Base):
            __tablename__ = 'mytable'
            id = Column(Integer, primary_key=True)
            stuff = Column(String(10))
            timestamp = Column(TIMESTAMP(), default=text('DEFAULT'))
            __mapper_args__ = {
                'version_id_col': timestamp,
                'version_id_generator': False,
            }
            __table_args__ = {
                'implicit_returning': False
            }

    Enabling Snapshot Isolation

    SQL Server has a default transaction isolation mode that locks entire tables, and causes even mildly concurrent applications to have long held locks and frequent deadlocks. Enabling snapshot isolation for the database as a whole is recommended for modern levels of concurrency support. This is accomplished via the following ALTER DATABASE commands executed at the SQL prompt:

        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON

    Background on SQL Server snapshot isolation is available in the SQL Server documentation.

    SQL Server SQL Constructs

    function sqlalchemy.dialects.mssql.try_cast(*arg, **kw)

    Create a TRY_CAST expression.

    TryCast is a subclass of SQLAlchemy’s Cast construct, and works in the same way, except that the SQL expression rendered is “TRY_CAST” rather than “CAST”. A usage sketch, assuming an existing Table named product_table:
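
        from sqlalchemy import Numeric, select
        from sqlalchemy.dialects.mssql import try_cast

        # product_table is assumed to be an existing Table with a unit_price column
        stmt = select(try_cast(product_table.c.unit_price, Numeric(10, 4)))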

    The above would render:

        SELECT TRY_CAST (product_table.unit_price AS NUMERIC(10, 4))
        FROM product_table

    New in version 1.3.7.

    SQL Server Data Types

    As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with SQL Server are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

        from sqlalchemy.dialects.mssql import (
            BIGINT,
            BINARY,
            BIT,
            CHAR,
            DATE,
            DATETIME,
            DATETIME2,
            DATETIMEOFFSET,
            DECIMAL,
            FLOAT,
            IMAGE,
            INTEGER,
            JSON,
            MONEY,
            NCHAR,
            NTEXT,
            NUMERIC,
            NVARCHAR,
            REAL,
            SMALLDATETIME,
            SMALLINT,
            SMALLMONEY,
            SQL_VARIANT,
            TEXT,
            TIME,
            TIMESTAMP,
            UNIQUEIDENTIFIER,
            VARBINARY,
            VARCHAR,
        )

    Types which are specific to SQL Server, or have SQL Server-specific construction arguments, are as follows:

    class sqlalchemy.dialects.mssql.BIT

    MSSQL BIT type.

    Both pyodbc and pymssql return values from BIT columns as Python <class ‘bool’> so just subclass Boolean.

    Members

    __init__()

    Class signature

    class sqlalchemy.dialects.mssql.BIT (sqlalchemy.types.Boolean)

    • method __init__(create_constraint: bool = False, name: Optional[str] = None, _create_events: bool = True, _adapted_from: Optional[SchemaType] = None)

      inherited from the sqlalchemy.types.Boolean.__init__ method of Boolean

      Construct a Boolean.

      • Parameters:

        • create_constraint

          defaults to False. If the boolean is generated as an int/smallint, also create a CHECK constraint on the table that ensures 1 or 0 as a value.

          Note

          it is strongly recommended that the CHECK constraint have an explicit name in order to support schema-management concerns. This can be established either by setting the Boolean.name parameter or by setting up an appropriate naming convention; see Configuring Constraint Naming Conventions for background.

          Changed in version 1.4: - this flag now defaults to False, meaning no CHECK constraint is generated for a non-native Boolean type.

        • name – if a CHECK constraint is generated, specify the name of the constraint.

    class sqlalchemy.dialects.mssql.CHAR

    The SQL CHAR type.

    Class signature

    class sqlalchemy.dialects.mssql.CHAR (sqlalchemy.types.String)

    • method __init__(length: Optional[int] = None, collation: Optional[str] = None)

      inherited from the sqlalchemy.types.String.__init__ method of String

      Create a string-holding type.

      • Parameters:

        • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

        • collation

          Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

          >>> from sqlalchemy import cast, select, String
          >>> print(select(cast('some string', String(collation='utf8'))))
          SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          Note

          In most cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database.

    class sqlalchemy.dialects.mssql.DATETIME2

    Class signature

    class sqlalchemy.dialects.mssql.DATETIME2 (sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime)

    class sqlalchemy.dialects.mssql.DATETIMEOFFSET

    Class signature

    class sqlalchemy.dialects.mssql.DATETIMEOFFSET (sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime)

    class sqlalchemy.dialects.mssql.IMAGE

    Members

    __init__()

    Class signature

    class sqlalchemy.dialects.mssql.IMAGE (sqlalchemy.types.LargeBinary)

    • method __init__(length: Optional[int] = None)

      inherited from the sqlalchemy.types.LargeBinary.__init__ method of LargeBinary

      Construct a LargeBinary type.

      • Parameters:

        length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.

    class sqlalchemy.dialects.mssql.JSON

    MSSQL JSON type.

    MSSQL supports JSON-formatted data as of SQL Server 2016.

    The JSON datatype at the DDL level will represent the datatype as NVARCHAR(max), but provides for JSON-level comparison functions as well as Python coercion behavior.

    JSON is used automatically whenever the base JSON datatype is used against a SQL Server backend.

    See also

    JSON - main documentation for the generic cross-platform JSON datatype.

    The JSON type supports persistence of JSON values as well as the core index operations provided by the JSON datatype, by adapting the operations to render the JSON_VALUE or JSON_QUERY functions at the database level.
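
    The indexed access examples below operate against a hypothetical data_table; a minimal sketch of such a table:

        from sqlalchemy import Column, Integer, MetaData, Table
        from sqlalchemy.types import JSON

        metadata = MetaData()
        data_table = Table(
            "data_table",
            metadata,
            Column("id", Integer, primary_key=True),
            Column("data", JSON),  # resolves to the SQL Server JSON type on mssql
        )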

    The SQL Server JSON type necessarily makes use of the JSON_QUERY and JSON_VALUE functions when querying for elements of a JSON object. These two functions have a major restriction in that they are mutually exclusive based on the type of object to be returned. The JSON_QUERY function only returns a JSON dictionary or list, but not an individual string, numeric, or boolean element; the JSON_VALUE function only returns an individual string, numeric, or boolean element. Both functions either return NULL or raise an error if they are not used against the correct expected value.

    To handle this awkward requirement, indexed access rules are as follows:

    1. When extracting a sub element from a JSON that is itself a JSON dictionary or list, the Comparator.as_json() accessor should be used:

        stmt = select(
            data_table.c.data["some key"].as_json()
        ).where(
            data_table.c.data["some key"].as_json() == {"sub": "structure"}
        )
    2. When extracting a sub element from a JSON that is a plain boolean, string, integer, or float, use the appropriate method among Comparator.as_boolean(), Comparator.as_string(), Comparator.as_integer(), Comparator.as_float():

        stmt = select(
            data_table.c.data["some key"].as_string()
        ).where(
            data_table.c.data["some key"].as_string() == "some string"
        )

    New in version 1.4.

    Members

    Class signature

    class sqlalchemy.dialects.mssql.JSON (sqlalchemy.types.JSON)

    • method sqlalchemy.dialects.mssql.JSON.__init__(none_as_null: bool = False)

      inherited from the sqlalchemy.types.JSON.__init__ method of JSON

      Construct a JSON type.

      • Parameters:

        none_as_null=False

        if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value, which may be passed directly as a parameter value that is specially interpreted by the JSON type as SQL NULL:

        from sqlalchemy import null
        conn.execute(table.insert(), {"data": null()})

        Note

        JSON.none_as_null does not apply to the values passed to Column.default and Column.server_default; a value of None passed for these parameters means “no default present”.

        Additionally, when used in SQL comparison expressions, the Python value None continues to refer to SQL null, and not JSON NULL. The JSON.none_as_null flag refers explicitly to the persistence of the value within an INSERT or UPDATE statement. The JSON.NULL value should be used for SQL expressions that wish to compare to JSON null.

        See also

        JSON.NULL

    class sqlalchemy.dialects.mssql.MONEY

    Class signature

    class sqlalchemy.dialects.mssql.MONEY (sqlalchemy.types.TypeEngine)

    class sqlalchemy.dialects.mssql.NCHAR

    The SQL NCHAR type.

    Class signature

    class sqlalchemy.dialects.mssql.NCHAR (sqlalchemy.types.Unicode)

    • method sqlalchemy.dialects.mssql.NCHAR.__init__(length=None, **kwargs)

      inherited from the sqlalchemy.types.Unicode.__init__ method of Unicode

      Create a Unicode object.

      Parameters are the same as that of String.

    class sqlalchemy.dialects.mssql.NTEXT

    MSSQL NTEXT type, for variable-length unicode text up to 2^30 characters.

    Members

    __init__()

    Class signature

    class sqlalchemy.dialects.mssql.NTEXT (sqlalchemy.types.UnicodeText)

    • method __init__(length=None, **kwargs)

      inherited from the sqlalchemy.types.UnicodeText.__init__ method of UnicodeText

      Create a Unicode-converting Text type.

      Parameters are the same as that of Text.

    class sqlalchemy.dialects.mssql.NVARCHAR

    The SQL NVARCHAR type.

    Class signature

    class sqlalchemy.dialects.mssql.NVARCHAR (sqlalchemy.types.Unicode)

    • method __init__(length=None, **kwargs)

      inherited from the sqlalchemy.types.Unicode.__init__ method of Unicode

      Create a Unicode object.

      Parameters are the same as that of String.

    class sqlalchemy.dialects.mssql.REAL

    Class signature

    class sqlalchemy.dialects.mssql.REAL (sqlalchemy.types.REAL)

    class sqlalchemy.dialects.mssql.ROWVERSION

    Implement the SQL Server ROWVERSION type.

    The ROWVERSION datatype is a SQL Server synonym for the TIMESTAMP datatype, however current SQL Server documentation suggests using ROWVERSION for new datatypes going forward.

    The ROWVERSION datatype does not reflect (e.g. introspect) from the database as itself; the returned datatype will be TIMESTAMP.

    This is a read-only datatype that does not support INSERT of values.

    New in version 1.2.

    See also

    TIMESTAMP

    Members

    Class signature

    class sqlalchemy.dialects.mssql.ROWVERSION (sqlalchemy.dialects.mssql.TIMESTAMP)

    • method sqlalchemy.dialects.mssql.ROWVERSION.__init__(convert_int=False)

      inherited from the sqlalchemy.dialects.mssql.base.TIMESTAMP.__init__ method of TIMESTAMP

      Construct a TIMESTAMP or ROWVERSION type.

      • Parameters:

        convert_int – if True, binary integer values will be converted to integers on read.

      New in version 1.2.

    class sqlalchemy.dialects.mssql.SMALLDATETIME

    Members

    __init__()

    Class signature

    class sqlalchemy.dialects.mssql.SMALLDATETIME (sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime)

    • method __init__(timezone: bool = False)

      inherited from the sqlalchemy.types.DateTime.__init__ method of DateTime

      Construct a new DateTime.

      • Parameters:

        timezone – boolean. Indicates that the datetime type should enable timezone support, if available on the base date/time-holding type only. It is recommended to make use of the TIMESTAMP datatype directly when using this flag, as some databases include separate generic date/time-holding types distinct from the timezone-capable TIMESTAMP datatype, such as Oracle.

    class sqlalchemy.dialects.mssql.SMALLMONEY

    Class signature

    class sqlalchemy.dialects.mssql.SMALLMONEY (sqlalchemy.types.TypeEngine)

    class sqlalchemy.dialects.mssql.SQL_VARIANT

    Class signature

    class sqlalchemy.dialects.mssql.SQL_VARIANT (sqlalchemy.types.TypeEngine)

    class sqlalchemy.dialects.mssql.TEXT

    The SQL TEXT type.

    Class signature

    class sqlalchemy.dialects.mssql.TEXT (sqlalchemy.types.Text)

    • method __init__(length: Optional[int] = None, collation: Optional[str] = None)

      inherited from the sqlalchemy.types.String.__init__ method of String

      Create a string-holding type.

      • Parameters:

        • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

        • collation

          Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

          >>> from sqlalchemy import cast, select, String
          >>> print(select(cast('some string', String(collation='utf8'))))
          SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          Note

          In most cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database.

    class sqlalchemy.dialects.mssql.TIME

    Class signature

    class sqlalchemy.dialects.mssql.TIME (sqlalchemy.types.TIME)

    class sqlalchemy.dialects.mssql.TIMESTAMP

    Implement the SQL Server TIMESTAMP type.

    Note this is completely different than the SQL Standard TIMESTAMP type, which is not supported by SQL Server. It is a read-only datatype that does not support INSERT of values.

    New in version 1.2.

    See also

    ROWVERSION

    Members

    Class signature

    class sqlalchemy.dialects.mssql.TIMESTAMP (sqlalchemy.types._Binary)

    • method __init__(convert_int=False)

      Construct a TIMESTAMP or ROWVERSION type.

      • Parameters:

        convert_int – if True, binary integer values will be converted to integers on read.

      New in version 1.2.

    class sqlalchemy.dialects.mssql.TINYINT

    Class signature

    class sqlalchemy.dialects.mssql.TINYINT (sqlalchemy.types.Integer)

    class sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER

    Members

    __init__()

    Class signature

    class sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER (sqlalchemy.types.Uuid)

    • method __init__(as_uuid: bool = True)

      Construct a UNIQUEIDENTIFIER type.

      • Parameters:

        as_uuid=True

        if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.

    class sqlalchemy.dialects.mssql.VARBINARY

    The MSSQL VARBINARY type.

    This type adds additional features to the core VARBINARY type, including “deprecate_large_types” mode where either VARBINARY(max) or IMAGE is rendered, as well as the SQL Server FILESTREAM option.

    New in version 1.0.0.

    See also

    Large Text/Binary Type Deprecation

    Class signature

    class sqlalchemy.dialects.mssql.VARBINARY (sqlalchemy.types.VARBINARY, sqlalchemy.types.LargeBinary)

    • method __init__(length=None, filestream=False)

      Construct a VARBINARY type.

      • Parameters:

        • length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.

        • filestream=False

          if True, renders the FILESTREAM keyword in the table definition. In this case length must be None or 'max'.

          New in version 1.4.31.

    class sqlalchemy.dialects.mssql.VARCHAR

    The SQL VARCHAR type.

    Class signature

    class sqlalchemy.dialects.mssql.VARCHAR (sqlalchemy.types.String)

    • method __init__(length: Optional[int] = None, collation: Optional[str] = None)

      inherited from the sqlalchemy.types.String.__init__ method of String

      Create a string-holding type.

      • Parameters:

        • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

        • collation

          Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

          >>> from sqlalchemy import cast, select, String
          >>> print(select(cast('some string', String(collation='utf8'))))
          SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          Note

          In most cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database.

    class sqlalchemy.dialects.mssql.XML

    MSSQL XML type.

    This is a placeholder type for reflection purposes that does not include any Python-side datatype support. It also does not currently support additional arguments, such as “CONTENT”, “DOCUMENT”, “xml_schema_collection”.

    New in version 1.1.11.

    Members

    __init__()

    Class signature

    class sqlalchemy.dialects.mssql.XML (sqlalchemy.types.Text)

    • method __init__(length: Optional[int] = None, collation: Optional[str] = None)

      inherited from the sqlalchemy.types.String.__init__ method of String

      Create a string-holding type.

      • Parameters:

        • length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.

        • collation

          Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and PostgreSQL. E.g.:

          >>> from sqlalchemy import cast, select, String
          >>> print(select(cast('some string', String(collation='utf8'))))
          SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          Note

          In most cases, the Unicode or UnicodeText datatypes should be used for a Column that expects to store non-ascii data. These datatypes will ensure that the correct types are used on the database.

    PyODBC

    Support for the Microsoft SQL Server database via the PyODBC driver.

    DBAPI

    Documentation and download information (if applicable) for PyODBC is available at: https://pypi.org/project/pyodbc/

    Connecting

    Connect String:

        mssql+pyodbc://<username>:<password>@<dsnname>

    Connecting to PyODBC

    The URL here is to be translated to a PyODBC connection string, as detailed in the sections that follow.

    DSN Connections

    A DSN connection in ODBC means that a pre-existing ODBC datasource is configured on the client machine. The application then specifies the name of this datasource, which encompasses details such as the specific ODBC driver in use as well as the network address of the database. Assuming a datasource is configured on the client, a basic DSN-based connection looks like:

        engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

    The above URL will pass the following connection string to PyODBC:

        DSN=some_dsn;UID=scott;PWD=tiger

    If the username and password are omitted, the DSN form will also add the Trusted_Connection=yes directive to the ODBC string.

    Hostname Connections

    Hostname-based connections are also supported by pyodbc. These are often easier to use than a DSN and have the additional advantage that the specific database name to connect towards may be specified locally in the URL, rather than it being fixed as part of a datasource configuration.

    When using a hostname connection, the driver name must also be specified in the query parameters of the URL. As these names usually have spaces in them, the name must be URL encoded which means using plus signs for spaces:

        engine = create_engine("mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=ODBC+Driver+17+for+SQL+Server")

    The driver keyword is significant to the pyodbc dialect and must be specified in lowercase.

    Any other names passed in the query string are passed through in the pyodbc connect string, such as authentication, TrustServerCertificate, etc. Multiple keyword arguments must be separated by an ampersand (&); these will be translated to semicolons when the pyodbc connect string is generated internally:

        e = create_engine(
            "mssql+pyodbc://scott:tiger@mssql2017:1433/test?"
            "driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes"
            "&authentication=ActiveDirectoryIntegrated"
        )

    The equivalent URL can be constructed using URL:

        from sqlalchemy.engine import URL

        connection_url = URL.create(
            "mssql+pyodbc",
            username="scott",
            password="tiger",
            host="mssql2017",
            port=1433,
            database="test",
            query={
                "driver": "ODBC Driver 18 for SQL Server",
                "TrustServerCertificate": "yes",
                "authentication": "ActiveDirectoryIntegrated",
            },
        )

    Pass through exact Pyodbc string

    A PyODBC connection string can also be sent in pyodbc’s format directly, as specified in the PyODBC documentation, using the parameter odbc_connect. A URL object can help make this easier:

        from sqlalchemy.engine import URL

        connection_string = "DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
        connection_url = URL.create("mssql+pyodbc", query={"odbc_connect": connection_string})

        engine = create_engine(connection_url)

    Connecting to databases with access tokens

    Some database servers are set up to only accept access tokens for login. For example, SQL Server allows the use of Azure Active Directory tokens to connect to databases. This requires creating a credential object using the azure-identity library. More information about the authentication step can be found in Microsoft’s documentation.

    After getting an engine, the credentials need to be sent to pyodbc.connect each time a connection is requested. One way to do this is to set up an event listener on the engine that adds the credential token to the dialect’s connect call. This is discussed more generally in Generating dynamic authentication tokens. For SQL Server in particular, this is passed as an ODBC connection attribute with a data structure defined in msodbcsql.h.

    The following code snippet will create an engine that connects to an Azure SQL database using Azure credentials:

        import struct

        from sqlalchemy import create_engine, event
        from sqlalchemy.engine.url import URL
        from azure import identity

        SQL_COPT_SS_ACCESS_TOKEN = 1256  # Connection option for access tokens, as defined in msodbcsql.h
        TOKEN_URL = "https://database.windows.net/"  # The token URL for any Azure SQL database

        connection_string = "mssql+pyodbc://@my-server.database.windows.net/myDb?driver=ODBC+Driver+17+for+SQL+Server"

        engine = create_engine(connection_string)

        azure_credentials = identity.DefaultAzureCredential()

        @event.listens_for(engine, "do_connect")
        def provide_token(dialect, conn_rec, cargs, cparams):
            # remove the "Trusted_Connection" parameter that SQLAlchemy adds
            cargs[0] = cargs[0].replace(";Trusted_Connection=Yes", "")

            # create token credential
            raw_token = azure_credentials.get_token(TOKEN_URL).token.encode("utf-16-le")
            token_struct = struct.pack(f"<I{len(raw_token)}s", len(raw_token), raw_token)

            # apply it to keyword arguments
            cparams["attrs_before"] = {SQL_COPT_SS_ACCESS_TOKEN: token_struct}

    Tip

    The Trusted_Connection token is currently added by the SQLAlchemy pyodbc dialect when no username or password is present. This needs to be removed per Microsoft’s documentation for Azure access tokens, stating that a connection string when using an access token must not contain UID, PWD, Authentication or Trusted_Connection parameters.

    Avoiding transaction-related exceptions on Azure Synapse Analytics

    Azure Synapse Analytics has a significant difference in its transaction handling compared to plain SQL Server; in some cases an error within a Synapse transaction can cause it to be arbitrarily terminated on the server side, which then causes the DBAPI .rollback() method (as well as .commit()) to fail. The issue prevents the usual DBAPI contract of allowing .rollback() to pass silently if no transaction is present, as the driver does not expect this condition. The symptom of this failure is an exception with a message resembling ‘No corresponding transaction found. (111214)’ when attempting to emit a .rollback() after an operation had a failure of some kind.

    This specific case can be handled by passing ignore_no_transaction_on_rollback=True to the SQL Server dialect via the create_engine() function as follows:

        engine = create_engine(connection_url, ignore_no_transaction_on_rollback=True)

    Using the above parameter, the dialect will catch ProgrammingError exceptions raised during connection.rollback() and emit a warning if the error message contains code 111214, however will not raise an exception.

    New in version 1.4.40: Added the ignore_no_transaction_on_rollback=True parameter.

    Enable autocommit for Azure SQL Data Warehouse (DW) connections

    Azure SQL Data Warehouse does not support transactions, and that can cause problems with SQLAlchemy’s “autobegin” (and implicit commit/rollback) behavior. We can avoid these problems by enabling autocommit at both the pyodbc and engine levels:

        connection_url = sa.engine.URL.create(
            "mssql+pyodbc",
            username="scott",
            password="tiger",
            host="dw.azure.example.com",
            database="mydb",
            query={
                "driver": "ODBC Driver 17 for SQL Server",
                "autocommit": "True",
            },
        )

        engine = create_engine(connection_url).execution_options(
            isolation_level="AUTOCOMMIT"
        )

    Avoiding sending large string parameters as TEXT/NTEXT

    By default, for historical reasons, Microsoft’s ODBC drivers for SQL Server send long string parameters (greater than 4000 SBCS characters or 2000 Unicode characters) as TEXT/NTEXT values. TEXT and NTEXT have been deprecated for many years and are starting to cause compatibility issues with newer versions of SQL Server/Azure.

    Starting with ODBC Driver 18 for SQL Server we can override the legacy behavior and pass long strings as varchar(max)/nvarchar(max) using the LongAsMax=Yes connection string parameter:

        connection_url = sa.engine.URL.create(
            "mssql+pyodbc",
            username="scott",
            password="tiger",
            host="mssqlserver.example.com",
            database="mydb",
            query={
                "driver": "ODBC Driver 18 for SQL Server",
                "LongAsMax": "Yes",
            },
        )

    Pyodbc Pooling / connection close behavior

    PyODBC uses internal pooling by default, which means connections will be longer lived than they are within SQLAlchemy itself. As SQLAlchemy has its own pooling behavior, it is often preferable to disable this behavior. This behavior can only be disabled globally at the PyODBC module level, before any connections are made:

        import pyodbc

        pyodbc.pooling = False

        # don't use the engine before pooling is set to False
        engine = create_engine("mssql+pyodbc://user:pass@dsn")

    If this variable is left at its default value of True, the application will continue to maintain active database connections, even when the SQLAlchemy engine itself fully discards a connection or if the engine is disposed.

    See also

    pooling - in the PyODBC documentation.

    Driver / Unicode Support

    PyODBC works best with Microsoft ODBC drivers, particularly in the area of Unicode support on both Python 2 and Python 3.

    Using the FreeTDS ODBC drivers on Linux or OSX with PyODBC is not recommended; there have been historically many Unicode-related issues in this area, including before Microsoft offered ODBC drivers for Linux and OSX. Now that Microsoft offers drivers for all platforms, for PyODBC support these are recommended. FreeTDS remains relevant for non-ODBC drivers such as pymssql where it works very well.

    Rowcount Support

    Pyodbc only has partial support for rowcount. See the notes at Rowcount Support / ORM Versioning for important notes when using ORM versioning.

    Fast Executemany Mode

    Note

    SQLAlchemy 2.0 now includes an equivalent “fast executemany” handler for INSERT statements that is more robust than the PyODBC feature; the feature is called insertmanyvalues and is enabled by default for all INSERT statements used by SQL Server. SQLAlchemy’s feature integrates with the PyODBC setinputsizes() method which allows for more accurate specification of datatypes, and additionally uses a dynamically sized, batched approach that scales to any number of columns and/or rows.

    The SQL Server fast_executemany parameter may be used at the same time as insertmanyvalues is enabled; however, the parameter will not take effect in most cases, as INSERT statements that are invoked using Core Insert constructs as well as all ORM use no longer use the .executemany() DBAPI cursor method.

    The PyODBC driver includes support for a “fast executemany” mode of execution which greatly reduces round trips for a DBAPI executemany() call when using Microsoft ODBC drivers, for limited size batches that fit in memory. The feature is enabled by setting the attribute .fast_executemany on the DBAPI cursor when an executemany call is to be used. The SQLAlchemy PyODBC SQL Server dialect supports this parameter by passing the fast_executemany parameter to create_engine(), when using the Microsoft ODBC driver only:

        engine = create_engine(
            "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+17+for+SQL+Server",
            fast_executemany=True,
        )

    New in version 1.3.

    See also

    fast executemany - on github

    Setinputsizes Support

    As of version 2.0, the pyodbc cursor.setinputsizes() method is used for all statement executions, except for cursor.executemany() calls when fast_executemany=True, where it is not supported (assuming insertmanyvalues is kept enabled, “fastexecutemany” will not take place for INSERT statements in any case).

    The behavior of setinputsizes can be customized via the DialectEvents.do_setinputsizes() hook. See that method for usage examples.
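
    As a sketch of that hook (the filtering criterion here is purely illustrative; the event receives the inputsizes dictionary, which maps bound parameters to DBAPI types and may be mutated in place):

        import pyodbc

        from sqlalchemy import create_engine, event

        engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

        @event.listens_for(engine, "do_setinputsizes")
        def _skip_wvarchar(inputsizes, cursor, statement, parameters, context):
            # illustrative: remove setinputsizes entries for NVARCHAR-bound
            # parameters so the driver infers those types on its own
            for bindparam, dbapitype in list(inputsizes.items()):
                if dbapitype is pyodbc.SQL_WVARCHAR:
                    del inputsizes[bindparam]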

    Changed in version 1.4.1: The pyodbc dialects will not use setinputsizes unless use_setinputsizes=True is passed.

    Changed in version 2.0: The mssql+pyodbc dialect now defaults to using setinputsizes for all statement executions with the exception of cursor.executemany() calls when fast_executemany=True.

    pymssql

    Support for the Microsoft SQL Server database via the pymssql driver.

    Connecting

    Connect String:

        mssql+pymssql://<username>:<password>@<freetds_name>/?charset=utf8

    pymssql is a Python module that provides a Python DBAPI interface around FreeTDS.

    Note

    pymssql is currently not included in SQLAlchemy’s continuous integration (CI) testing.