18.5.9. Develop with asyncio
18.5.9.1. Debug mode of asyncio
The implementation of asyncio has been written for performance. In order to ease the development of asynchronous code, you may wish to enable debug mode.
To enable all debug checks for an application:
Enable the asyncio debug mode globally by setting the environment variable PYTHONASYNCIODEBUG to 1, or by calling AbstractEventLoop.set_debug() (see the sketch after this list).
Set the log level of the asyncio logger to logging.DEBUG. For example, call logging.basicConfig(level=logging.DEBUG) at startup.
Configure the warnings module to display ResourceWarning warnings. For example, use the -Wdefault command line option of Python to display them.
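For illustration, here is a minimal sketch that enables all of these checks from code at startup; the loop variable and the exact warning filter are illustrative choices, not part of the original steps:
import asyncio
import logging
import warnings

loop = asyncio.get_event_loop()
loop.set_debug(True)                       # enable debug mode for this loop (like PYTHONASYNCIODEBUG=1)
logging.basicConfig(level=logging.DEBUG)   # verbose output from the 'asyncio' logger
warnings.simplefilter("default", ResourceWarning)  # show ResourceWarning, like -Wdefault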
Examples of debug checks:
Log coroutines defined but never "yielded from".
The call_soon() and call_at() methods raise an exception if they are called from the wrong thread.
Log the execution time of the selector.
Log callbacks taking more than 100 ms to be executed. The AbstractEventLoop.slow_callback_duration attribute is the minimum duration in seconds of "slow" callbacks.
ResourceWarning warnings are emitted when transports and event loops are not closed explicitly.
See also
The AbstractEventLoop.set_debug() method and the asyncio logger.
18.5.9.2. Cancellation
Cancellation of tasks is not common in classic programming. In asynchronous programming, not only is it common, but your code must be prepared to handle it.
Futures and tasks can be cancelled explicitly with their Future.cancel() method. The wait_for() function cancels the waited task when the timeout occurs. There are many other cases where a task can be cancelled indirectly.
Don't call the set_result() or set_exception() method of a Future if the future is cancelled: it would fail with an exception. For example, write:
Don’t schedule directly a call to the or the set_exception() method of a future with : the future can be cancelled before its method is called.
If you wait for a future, you should check early if the future was cancelled to avoid useless operations. Example:
@coroutine
def slow_operation(fut):
    if fut.cancelled():
        return
    # ... slow computation ...
    yield from fut
    # ...
The shield() function can also be used to ignore cancellation.
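For illustration, a minimal sketch of protecting an inner operation with shield(); inner() and outer() are made-up names, not from the original page:
import asyncio

@asyncio.coroutine
def inner():
    # Hypothetical slow operation that should survive cancellation of the caller.
    yield from asyncio.sleep(10)
    return 'done'

@asyncio.coroutine
def outer():
    try:
        # Cancelling outer() raises CancelledError here,
        # but the shielded inner task keeps running.
        result = yield from asyncio.shield(inner())
    except asyncio.CancelledError:
        result = None
    return result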
18.5.9.3. Concurrency and multithreading
An event loop runs in a thread and executes all callbacks and tasks in the same thread. While a task is running in the event loop, no other task is running in the same thread. But when the task uses yield from, the task is suspended and the event loop executes the next task.
To schedule a callback from a different thread, the AbstractEventLoop.call_soon_threadsafe() method should be used. Example:
loop.call_soon_threadsafe(callback, *args)
Most asyncio objects are not thread safe. You should only worry if you access objects outside the event loop. For example, to cancel a future, don't call its Future.cancel() method directly, but use:
loop.call_soon_threadsafe(fut.cancel)
To be able to handle signals and to execute subprocesses, the event loop must be run in the main thread.
To schedule a coroutine object from a different thread, the run_coroutine_threadsafe() function should be used. It returns a concurrent.futures.Future to access the result:
future = asyncio.run_coroutine_threadsafe(coro_func(), loop)
result = future.result(timeout) # Wait for the result with a timeout
The AbstractEventLoop.run_in_executor() method can be used with a thread pool executor to execute a callback in a different thread so as not to block the thread of the event loop.
See also
The Synchronization primitives section describes ways to synchronize tasks.
The Subprocess and threads section lists asyncio limitations to run subprocesses from different threads.
18.5.9.4. Handle blocking functions correctly
Blocking functions should not be called directly. For example, if a function blocks for 1 second, other tasks are delayed by 1 second, which can have a significant impact on reactivity.
For networking and subprocesses, the asyncio module provides high-level APIs like protocols.
An executor can be used to run a task in a different thread or even in a different process, to not block the thread of the event loop. See the AbstractEventLoop.run_in_executor() method.
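For example, a minimal sketch of offloading a blocking call to the default thread pool executor; blocking_io() and caller() are illustrative names, and None selects the loop's default executor:
import asyncio
import time

def blocking_io():
    # A blocking call that would freeze the event loop if called directly.
    time.sleep(1)
    return 'done'

@asyncio.coroutine
def caller(loop):
    # Run the blocking function in the default executor so the event loop
    # can keep serving other tasks meanwhile.
    result = yield from loop.run_in_executor(None, blocking_io)
    print(result)

loop = asyncio.get_event_loop()
loop.run_until_complete(caller(loop))
loop.close()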
See also
The Delayed calls section details how the event loop handles time.
18.5.9.5. Logging
The asyncio module logs information with the logging module in the logger 'asyncio'.
The default log level for the asyncio module is logging.INFO. If you do not want such verbosity from asyncio, the log level can be changed. For example, to change the level to logging.WARNING:
logging.getLogger('asyncio').setLevel(logging.WARNING)
18.5.9.6. Detect coroutine objects never scheduled
When a coroutine function is called and its result is not passed to the ensure_future() function or to the AbstractEventLoop.create_task() method, the execution of the coroutine object will never be scheduled, which is probably a bug. Enable the debug mode of asyncio to detect it.
Example with the bug:
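import asyncio

@asyncio.coroutine
def test():
    print("never scheduled")

test()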
Output in debug mode:
Coroutine test() at test.py:3 was never yielded from
Coroutine object created at (most recent call last):
  File "test.py", line 7, in <module>
    test()
The fix is to call the ensure_future() function or the AbstractEventLoop.create_task() method with the coroutine object.
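For instance, a fixed variant of the example above might look like this sketch (the loop handling shown is one possible choice, not text from the original page):
import asyncio

@asyncio.coroutine
def test():
    print("now scheduled")

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.ensure_future(test()))  # or: loop.create_task(test())
loop.close()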
See also
The Pending task destroyed section.
18.5.9.7. Detect exceptions never consumed
If Future.set_exception() is called but the exception is never consumed, it is not reported immediately; instead, a log is emitted when the future is deleted by the garbage collector, with the traceback where the exception was raised.
Example of unhandled exception:
import asyncio

@asyncio.coroutine
def bug():
    raise Exception("not consumed")

loop = asyncio.get_event_loop()
asyncio.ensure_future(bug())
loop.run_forever()
loop.close()
Output:
Task exception was never retrieved
Traceback (most recent call last):
  File "asyncio/tasks.py", line 237, in _step
    result = next(coro)
  File "asyncio/coroutines.py", line 141, in coro
    res = func(*args, **kw)
  File "test.py", line 5, in bug
    raise Exception("not consumed")
Exception: not consumed
Enable the debug mode of asyncio to get the traceback where the task was created. Output in debug mode:
Task exception was never retrieved
source_traceback: Object created at (most recent call last):
  File "test.py", line 8, in <module>
    asyncio.ensure_future(bug())
Traceback (most recent call last):
  File "asyncio/tasks.py", line 237, in _step
    result = next(coro)
  File "asyncio/coroutines.py", line 79, in __next__
    return next(self.gen)
  File "asyncio/coroutines.py", line 141, in coro
    res = func(*args, **kw)
  File "test.py", line 5, in bug
    raise Exception("not consumed")
Exception: not consumed
There are different options to fix this issue. The first option is to chain the coroutine in another coroutine and use classic try/except:
@asyncio.coroutine
def handle_exception():
    try:
        yield from bug()
    except Exception:
        print("exception consumed")

loop = asyncio.get_event_loop()
asyncio.ensure_future(handle_exception())
loop.run_forever()
loop.close()
Another option is to use the AbstractEventLoop.run_until_complete() function:
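task = asyncio.ensure_future(bug())
try:
    loop.run_until_complete(task)
except Exception:
    print("exception consumed")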
See also
The Future.exception() method.
18.5.9.8. Chain coroutines correctly
When a coroutine function calls other coroutine functions and tasks, they should be chained explicitly with yield from. Otherwise, the execution is not guaranteed to be sequential.
Example with different bugs using asyncio.sleep() to simulate slow operations:
import asyncio

@asyncio.coroutine
def create():
    yield from asyncio.sleep(3.0)
    print("(1) create file")

@asyncio.coroutine
def write():
    yield from asyncio.sleep(1.0)
    print("(2) write into file")

@asyncio.coroutine
def close():
    print("(3) close file")

@asyncio.coroutine
def test():
    asyncio.ensure_future(create())
    asyncio.ensure_future(write())
    yield from asyncio.sleep(2.0)
    loop.stop()

loop = asyncio.get_event_loop()
asyncio.ensure_future(test())
loop.run_forever()
print("Pending tasks at exit: %s" % asyncio.Task.all_tasks(loop))
loop.close()
Expected output:
(1) create file
(2) write into file
(3) close file
Pending tasks at exit: set()
Actual output:
(3) close file
(2) write into file
Pending tasks at exit: {<Task pending create() at test.py:7 wait_for=<Future pending cb=[Task._wakeup()]>>}
Task was destroyed but it is pending!
task: <Task pending create() done at test.py:5 wait_for=<Future pending cb=[Task._wakeup()]>>
The loop stopped before create() finished, and close() was called before write(), even though the coroutine functions were called in this order: create(), write(), close().
To fix the example, tasks must be marked with yield from:
@asyncio.coroutine
def test():
    yield from asyncio.ensure_future(create())
    yield from asyncio.ensure_future(write())
    yield from asyncio.ensure_future(close())
    yield from asyncio.sleep(2.0)
    loop.stop()
Or without asyncio.ensure_future():
@asyncio.coroutine
def test():
    yield from create()
    yield from write()
    yield from close()
    yield from asyncio.sleep(2.0)
    loop.stop()
18.5.9.9. Pending task destroyed
If a pending task is destroyed, the execution of its wrapped coroutine did not complete. It is probably a bug and so a warning is logged.
Example of log:
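Task was destroyed but it is pending!
task: <Task pending coro=<kill_me() done, defined at test.py:5> wait_for=<Future pending cb=[Task._wakeup()]>>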
Enable the debug mode of asyncio to get the traceback where the task was created. Example of log in debug mode:
Task was destroyed but it is pending!
source_traceback: Object created at (most recent call last):
File "test.py", line 15, in <module>
task = asyncio.ensure_future(coro, loop=loop)
task: <Task pending coro=<kill_me() done, defined at test.py:5> wait_for=<Future pending cb=[Task._wakeup()] created at test.py:7> created at test.py:15>
See also
The Detect coroutine objects never scheduled section.
18.5.9.10. Close transports and event loops
When a transport is no longer needed, call its close() method to release resources. Event loops must also be closed explicitly.
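A minimal sketch of the usual pattern; the connection target and the try/finally structure are illustrative choices rather than text from this page:
import asyncio

@asyncio.coroutine
def main(loop):
    # Open a connection; the returned transport must be closed when done.
    transport, protocol = yield from loop.create_connection(
        asyncio.Protocol, 'example.com', 80)
    try:
        yield from asyncio.sleep(1.0)  # use the connection...
    finally:
        transport.close()  # release the socket and its buffers

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(main(loop))
finally:
    loop.close()  # release the loop's own resources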