High Availability and PyMongo

    PyMongo makes working with replica sets easy. Here we’ll launch a new replica set and show how to handle both initialization and normal connections with PyMongo.

    See also

    The MongoDB documentation on replica sets

    The main replica set documentation contains extensive information about setting up a new replica set or migrating an existing MongoDB setup; be sure to check that out. Here, we’ll just do the bare minimum to get a three-node replica set running locally.

    Warning

    Replica sets should always use multiple nodes in production - putting all set members on the same physical node is only recommended for testing and development.

    We start three processes, each on a different port and with a different dbpath, but all using the same replica set name “foo”.

    $ mongod --port 27017 --dbpath /data/db0 --replSet foo
    $ mongod --port 27018 --dbpath /data/db1 --replSet foo
    $ mongod --port 27019 --dbpath /data/db2 --replSet foo

    At this point all of our nodes are up and running, but the set has yet to be initialized. Until the set is initialized no node will become the primary, and things are essentially “offline”.

    To initialize the set we need to connect to a single node and run the initiate command:

    >>> from pymongo import MongoClient
    >>> c = MongoClient('localhost', 27017)

    Note

    We could have connected to any of the other nodes instead, but only the node we initiate from is allowed to contain any initial data.

    After connecting, we run the initiate command to get things started:

    >>> config = {'_id': 'foo', 'members': [
    ...     {'_id': 0, 'host': 'localhost:27017'},
    ...     {'_id': 1, 'host': 'localhost:27018'},
    ...     {'_id': 2, 'host': 'localhost:27019'}]}
    >>> c.admin.command("replSetInitiate", config)
    {'ok': 1.0, ...}

    The three mongod servers we started earlier will now coordinate and come online as a replica set.
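    Electing the first primary can take a few seconds. If you want to block until the set is ready, a small polling helper like the following can watch the real replSetGetStatus command (the helper names here are illustrative, not part of PyMongo):

```python
import time

def has_primary(status):
    """True if a replSetGetStatus document reports a PRIMARY member."""
    return any(m.get('stateStr') == 'PRIMARY' for m in status.get('members', []))

def wait_for_primary(client, timeout=30):
    """Poll replSetGetStatus until a primary is elected, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if has_primary(client.admin.command('replSetGetStatus')):
            return True
        time.sleep(1)
    return False
```

    With the client from above, `wait_for_primary(c)` returns True once one of the three members reports itself PRIMARY.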

    The initial connection made above is a special case for an uninitialized replica set. Normally we’ll want to connect differently. A connection to a replica set can be made using the MongoClient constructor, specifying one or more members of the set along with the replica set name. Any of the following connects to the replica set we just created:

    >>> MongoClient('localhost', replicaset='foo')
    MongoClient(host=['localhost:27017'], replicaset='foo', ...)
    >>> MongoClient('localhost:27018', replicaset='foo')
    MongoClient(['localhost:27018'], replicaset='foo', ...)
    >>> MongoClient('localhost', 27019, replicaset='foo')
    MongoClient(['localhost:27019'], replicaset='foo', ...)
    >>> MongoClient(['localhost:27017', 'localhost:27018'], replicaset='foo')
    MongoClient(['localhost:27017', 'localhost:27018'], replicaset='foo', ...)

    The addresses passed to MongoClient() are called the seeds. As long as at least one of the seeds is online, MongoClient discovers all the members in the replica set, and determines which is the current primary and which are secondaries or arbiters. Each seed must be the address of a single mongod. Multihomed and round robin DNS addresses are not supported.

    You need not wait for replica set discovery in your application, however. If you need to do any operation with a MongoClient, such as a find() or an insert_one(), the client waits to discover a suitable member before it attempts the operation.

    When a failover occurs, PyMongo will automatically attempt to find the new primary node and perform subsequent operations on that node. This can’t happen completely transparently, however. Here we’ll perform an example failover to illustrate how everything behaves. First, we’ll connect to the replica set and perform a couple of basic operations:

    >>> db = MongoClient("localhost", replicaSet='foo').test
    >>> db.test.insert_one({"x": 1}).inserted_id
    ObjectId('...')
    >>> db.test.find_one()
    {u'x': 1, u'_id': ObjectId('...')}

    By checking the host and port, we can see that we’re connected to localhost:27017, which is the current primary:

    >>> db.client.address
    ('localhost', 27017)

    Now let’s bring down that node and see what happens when we run our query again:

    >>> db.test.find_one()
    Traceback (most recent call last):
    pymongo.errors.AutoReconnect: ...

    We get an exception. This means that the driver was not able to connect to the old primary (which makes sense, as we killed the server), but that it will attempt to automatically reconnect on subsequent operations. When this exception is raised our application code needs to decide whether to retry the operation or to simply continue, accepting the fact that the operation might have failed.

    On subsequent attempts to run the query we might continue to see this exception. Eventually, however, the replica set will failover and elect a new primary (this should take no more than a couple of seconds in general). At that point the driver will connect to the new primary and the operation will succeed:

    >>> db.test.find_one()
    {u'x': 1, u'_id': ObjectId('...')}
    >>> db.client.address
    ('localhost', 27018)

    Bring the former primary back up. It will rejoin the set as a secondary. Now we can move to the next section: distributing reads to secondaries.

    By default an instance of MongoClient sends queries to the primary member of the replica set. To use secondaries for queries we have to change the read preference:

    >>> client = MongoClient(
    ...     'localhost:27017',
    ...     replicaSet='foo',
    ...     readPreference='secondaryPreferred')
    >>> client.read_preference
    SecondaryPreferred(tag_sets=None)

    Now all queries will be sent to the secondary members of the set. If there are no secondary members the primary will be used as a fallback. If you have queries you would prefer to never send to the primary you can specify that using the secondary read preference.

    By default the read preference of a Database is inherited from its MongoClient, and the read preference of a Collection is inherited from its Database. To use a different read preference use the get_database() method, or the get_collection() method.

    You can also change the read preference of an existing Collection with the with_options() method:

    >>> coll2 = coll.with_options(read_preference=ReadPreference.NEAREST)
    >>> coll.read_preference
    Primary()
    >>> coll2.read_preference
    Nearest(tag_sets=None)

    Note that since most database commands can only be sent to the primary of a replica set, the command() method does not obey the Database’s read_preference, but you can pass an explicit read preference to the method:

    >>> db.command('dbstats', read_preference=ReadPreference.NEAREST)
    {...}

    Reads are configured using three options: read preference, tag sets, and local threshold.

    Read preference:

    Read preference is configured using one of the classes from read_preferences (Primary, PrimaryPreferred, Secondary, SecondaryPreferred, or Nearest). For convenience, we also provide ReadPreference with the following attributes:

    • PRIMARY: Read from the primary. This is the default read preference, and provides the strongest consistency. If no primary is available, raise AutoReconnect.
    • PRIMARY_PREFERRED: Read from the primary if available, otherwise read from a secondary.
    • SECONDARY: Read from a secondary. If no matching secondary is available, raise AutoReconnect.
    • SECONDARY_PREFERRED: Read from a secondary if available, otherwise from the primary.
    • NEAREST: Read from any available member.

    Tag sets:

    Replica set members can be tagged, for example by data center. To prefer members with particular tags, pass a list of tag-set dictionaries; PyMongo tries each tag set in order:

    >>> from pymongo.read_preferences import Secondary
    >>> db = client.get_database(
    ...     'test', read_preference=Secondary([{'dc': 'ny'}, {'dc': 'sf'}]))
    >>> db.read_preference
    Secondary(tag_sets=[{'dc': 'ny'}, {'dc': 'sf'}])

    MongoClient tries to find secondaries in New York, then San Francisco, and raises AutoReconnect if none are available. As an additional fallback, specify a final, empty tag set, {}, which means “read from any member that matches the mode, ignoring tags.”

    See read_preferences for more information.

    Local threshold:

    If multiple members match the read preference and tag sets, PyMongo reads from among the nearest members, chosen according to ping time. By default, only members whose ping times are within 15 milliseconds of the nearest are used for queries. You can choose to distribute reads among members with higher latencies by setting localThresholdMS to a larger number:

    >>> client = pymongo.MongoClient(
    ...     replicaSet='repl0',
    ...     readPreference='secondaryPreferred',
    ...     localThresholdMS=35)

    In this case, PyMongo distributes reads among matching members within 35 milliseconds of the closest member’s ping time.

    Note

    localThresholdMS is ignored when talking to a replica set through a mongos. The equivalent is the localThreshold command line option.

    Health Monitoring

    When MongoClient is initialized it launches background threads to monitor the replica set for changes in:

    • Health: detect when a member goes down or comes up, or if a different member becomes primary
    • Configuration: detect when members are added or removed, and detect changes in members’ tags
    • Latency: track a moving average of each member’s ping time

    Replica-set monitoring ensures queries are continually routed to the proper members as the state of the replica set changes.

    mongos Load Balancing

    An instance of MongoClient can be configured with a list of addresses of mongos servers:

    >>> client = MongoClient('mongodb://host1,host2,host3')

    Each member of the list must be a single mongos server. Multihomed and round robin DNS addresses are not supported. The client continuously monitors all the mongoses’ availability, and its network latency to each.

    PyMongo distributes operations evenly among the set of mongoses within its localThresholdMS (similar to how it distributes reads to secondaries in a replica set). By default the threshold is 15 ms.

    The lowest-latency server, and all servers with latencies no more than localThresholdMS beyond the lowest-latency server’s, receive operations equally. For example, if we have three mongoses:

    • host1: 20 ms
    • host2: 35 ms
    • host3: 40 ms

    By default the localThresholdMS is 15 ms, so PyMongo uses host1 and host2 evenly. It uses host1 because its network latency to the driver is shortest. It uses host2 because its latency is within 15 ms of the lowest-latency server’s. But it excludes host3: host3 is 20 ms beyond the lowest-latency server.

    If we set localThresholdMS to 30 ms, all three servers are within the threshold.
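    The selection rule above is plain arithmetic, sketched here with an illustrative helper (within_threshold is not a PyMongo API; the latencies are assumed example values consistent with the discussion above):

```python
def within_threshold(latencies, threshold_ms=15):
    """Hosts eligible for operations: within threshold_ms of the fastest."""
    fastest = min(latencies.values())
    return {host for host, ms in latencies.items() if ms - fastest <= threshold_ms}

# Assumed example latencies, in milliseconds
latencies = {'host1': 20, 'host2': 35, 'host3': 40}
print(sorted(within_threshold(latencies)))      # ['host1', 'host2']
print(sorted(within_threshold(latencies, 30)))  # ['host1', 'host2', 'host3']
```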

    Warning