@@ -41,7 +41,7 @@ memory. This is much heavier weight than threading, but can be used effectively
 sometimes.
 
 Recently, there have been two major attempts to improve access to multiple cores
-in Python. Python 3.12 added a subinterpeters each with their own GIL; two pure
+in Python. Python 3.12 added subinterpreters, each with their own GIL; two pure
 Python ways to access these are being added in Python 3.14 (previously there was
 only a C API and third-party wrappers). Compiled extensions have to opt into
 supporting multiple interpreters.
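
As a rough sketch of how the new pure-Python access might look (assuming Python 3.14's `concurrent.futures.InterpreterPoolExecutor`, one of the two routes mentioned; this sketch is not part of the diff or the repo's examples):

```python
# Minimal sketch, assuming Python 3.14+.
# InterpreterPoolExecutor follows the familiar Executor API, but each worker
# runs in its own subinterpreter with its own GIL, so CPU-bound work can run
# in parallel without separate processes.
from concurrent.futures import InterpreterPoolExecutor


def work(n: int) -> int:
    # Plain CPU-bound function; it runs inside a worker subinterpreter.
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    with InterpreterPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(work, [5_000_000] * 4)))
```

Much like with process pools, the submitted functions and arguments have to be importable/picklable so they can be shared with the worker interpreters.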
@@ -442,7 +442,7 @@ something like a notebook.
 Here's our π example. Since we don't have to communicate anything other than an
 integer, it's trivial and reasonably performant, minus the startup time:
 
-```{literalinclude} piexample/threadexec.py
+```{literalinclude} piexample/procexec.py
 :linenos:
 :lineno-match: true
 :lines: 15-
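
The included `procexec.py` file itself isn't visible in this hunk; purely as an illustration of the pattern it refers to (the names and structure below are guesses, not the repo's code), a process-pool Monte Carlo estimate where each worker returns a single integer might look like:

```python
# Illustrative sketch only -- not the repo's piexample/procexec.py.
# Each worker process computes a partial count and returns one integer,
# so very little data has to cross the process boundary.
import random
from concurrent.futures import ProcessPoolExecutor


def count_inside(samples: int) -> int:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside


if __name__ == "__main__":
    n, workers = 4_000_000, 4
    with ProcessPoolExecutor(max_workers=workers) as pool:
        counts = list(pool.map(count_inside, [n // workers] * workers))
    print(4 * sum(counts) / n)
```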
@@ -469,18 +469,15 @@ also making the context manager async:
 :linenos:
 ```
 
-Since the actual multithreading above comes from moving a function into threads,
-it is identical to the threading examples when it comes to performance (same-ish
-on normal Python, faster on free-threaded). The `async` part is about the
-control flow. Outside of the `to_thread` part, we don't have to worry about
-normal thread issues, like data races, thread safety, etc, as it's just oddly
-written single threaded code. Every place you see `await`, that's where code
-pauses, gives up control and lets the event loop (which is created by
-`asyncio.run`, there are third party ones too) take control and "unpause" some
-other waiting `async` function if it's ready. It's great for things that take
-time, like IO. This is not as commonly used for threaded code like we've done,
-but more for "reactive" programs that do something based on external input
-(GUIs, networking, etc).
+Every place you see `await`, that's where code pauses, gives up control, and lets
+the event loop (which is created by `asyncio.run`; there are third-party ones
+too) take control and "unpause" some other waiting `async` function if it's
+ready.
+
+You will notice no performance improvement over the single-threaded version of
+the code, since the asyncio event loop runs on the main thread and relies on
+each async function to give up control so that other async functions can proceed,
+as we've done using `asyncio.sleep()`.
 
 Notice how we didn't need a special `queue` like in some of the other examples.
 We could just create and loop over a normal list filled with tasks.
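
As a tiny, self-contained illustration of that hand-off at each `await` (not one of the repo's `piexample` files), here are two coroutines waiting concurrently on a single thread:

```python
import asyncio
import time


async def waiter(name: str, delay: float) -> str:
    # Each await hands control back to the event loop, which can then
    # resume any other coroutine that is ready to run.
    await asyncio.sleep(delay)
    return name


async def main() -> None:
    start = time.perf_counter()
    # Both coroutines wait concurrently on the main thread, so this takes
    # about 1 second in total, not 2.
    results = await asyncio.gather(waiter("first", 1.0), waiter("second", 1.0))
    print(results, f"{time.perf_counter() - start:.1f}s")


asyncio.run(main())
```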
@@ -489,3 +486,22 @@ Also notice that these "async functions" are called and create the awaitable
 object, so we didn't need any odd `(f, args)` syntax when making them, just the
 normal `f(args)`. Every object you create that is awaitable should eventually be
 awaited; Python will show a warning otherwise.
+
+`async` is great for processing that takes time but shouldn't hog up all the
+CPU. It is mostly used for "reactive" programs that do something based on
+external input (GUIs, networking, etc.).
+
+It is also possible to run blocking code in a thread from `async` code by
+awaiting `asyncio.to_thread(function, *args)`.
+
+```{literalinclude} piexample/asyncpi_thread.py
+:linenos:
+```
+
+Since the actual multithreading above comes from moving a function into threads,
+it is identical to the threading examples when it comes to performance (same-ish
+on normal Python, faster on free-threaded).
+
+Outside of the `to_thread` part, we don't have to worry about normal thread
+issues, like data races, thread safety, etc., as it's just oddly written
+single-threaded code.
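
The included `asyncpi_thread.py` likewise isn't visible here; a rough standalone sketch of the `asyncio.to_thread` pattern being described (illustrative names only, not the repo's code) could look like:

```python
# Illustrative sketch only -- not the repo's piexample/asyncpi_thread.py.
import asyncio
import random


def count_inside(samples: int) -> int:
    # Ordinary blocking function; this is what gets moved into a thread.
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return inside


async def main() -> None:
    n, workers = 4_000_000, 4
    # Each to_thread() call runs count_inside in a worker thread; awaiting
    # gather() lets the event loop wait on all of them at once.
    counts = await asyncio.gather(
        *(asyncio.to_thread(count_inside, n // workers) for _ in range(workers))
    )
    print(4 * sum(counts) / n)


asyncio.run(main())
```

On standard CPython the threads still share a GIL, so this mainly keeps the event loop responsive while the work runs; on a free-threaded build the slices can genuinely run in parallel.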