Merged
2 changes: 1 addition & 1 deletion content/week01/practices.md
Original file line number Diff line number Diff line change
@@ -272,7 +272,7 @@ get_rect_area(x1, y1, x2, y2)
```

Someone calling the function could easily make a mistake:
`get_rect_area(x1, x2, y1, y1)` for example. However, if you bundle this:
`get_rect_area(x1, x2, y1, y2)` for example. However, if you bundle this:

```python
def get_rect_area(point_1, point_2): ... # does stuff
13 changes: 7 additions & 6 deletions content/week11/piexample/asyncpi.py
@@ -13,22 +13,23 @@ async def timer():
print(f"Took {time.monotonic() - start:.3}s to run")


def pi_each(trials: int) -> None:
async def pi_async(trials: int) -> float:
Ncirc = 0
rand = random.Random()

for _ in range(trials):
for trial in range(trials):
x = rand.uniform(-1, 1)
y = rand.uniform(-1, 1)

if x * x + y * y <= 1:
Ncirc += 1

return 4.0 * (Ncirc / trials)

# yield to event loop every 1000 iterations
# This allows other asyncs to do their job
if trial % 1000 == 0:
await asyncio.sleep(0)

async def pi_async(trials: int):
return await asyncio.to_thread(pi_each, trials)
return 4.0 * (Ncirc / trials)


@timer()
45 changes: 45 additions & 0 deletions content/week11/piexample/asyncpi_thread.py
@@ -0,0 +1,45 @@
import contextlib
import random
import statistics
import threading
import time
import asyncio


@contextlib.asynccontextmanager
async def timer():
start = time.monotonic()
yield
print(f"Took {time.monotonic() - start:.3}s to run")


def pi_each(trials: int) -> float:
Ncirc = 0
rand = random.Random()

for _ in range(trials):
x = rand.uniform(-1, 1)
y = rand.uniform(-1, 1)

if x * x + y * y <= 1:
Ncirc += 1

return 4.0 * (Ncirc / trials)


async def pi_async(trials: int) -> float:
return await asyncio.to_thread(pi_each, trials)


@timer()
async def pi_all(trials: int, threads: int) -> float:
async with asyncio.TaskGroup() as tg:
tasks = [tg.create_task(pi_async(trials // threads)) for _ in range(threads)]
return statistics.mean(t.result() for t in tasks)


def pi(trials: int, threads: int) -> float:
return asyncio.run(pi_all(trials, threads))


print(f"{pi(10_000_000, 10)=}")
44 changes: 30 additions & 14 deletions content/week11/threading.md
Expand Up @@ -41,7 +41,7 @@ memory. This is much heaver weight than threading, but can be used effectively
sometimes.

Recently, there have been two major attempts to improve access to multiple cores
in Python. Python 3.12 added a subinterpeters each with their own GIL; two pure
in Python. Python 3.12 added subinterpreters, each with their own GIL; two pure
Python ways to access these are being added in Python 3.14 (previously there was
only a C API and third-party wrappers). Compiled extensions have to opt-into
supporting multiple interpreters.
@@ -442,7 +442,7 @@ something like a notebook.
Here's our π example. Since we don't have to communicate anything other than an
integer, it's trivial and reasonably performant, minus the startup time:

```{literalinclude} piexample/threadexec.py
```{literalinclude} piexample/procexec.py
:linenos:
:lineno-match: true
:lines: 15-
@@ -469,18 +469,15 @@ also making the context manager async:
:linenos:
```

Since the actual multithreading above comes from moving a function into threads,
it is identical to the threading examples when it comes to performance (same-ish
on normal Python, faster on free-threaded). The `async` part is about the
control flow. Outside of the `to_thread` part, we don't have to worry about
normal thread issues, like data races, thread safety, etc, as it's just oddly
written single threaded code. Every place you see `await`, that's where code
pauses, gives up control and lets the event loop (which is created by
`asyncio.run`, there are third party ones too) take control and "unpause" some
other waiting `async` function if it's ready. It's great for things that take
time, like IO. This is not as commonly used for threaded code like we've done,
but more for "reactive" programs that do something based on external input
(GUIs, networking, etc).
Every place you see `await`, that's where code pauses, gives up control and lets
the event loop (which is created by `asyncio.run`, there are third party ones
too) take control and "unpause" some other waiting `async` function if it's
ready.

You will notice no performance improvement over the single-threaded version of
the code, since the asyncio event loop runs on the main thread, and relies on
the async function to give up control so that other async functions can proceed,
like we've done using `asyncio.sleep(0)`.

Notice how we didn't need a special `queue` like in some of the other examples.
We could just create and loop over a normal list filled with tasks.
@@ -489,3 +486,22 @@ Also notice that these "async functions" are called and create the awaitable
object, so we didn't need any odd `(f, args)` syntax when making them, just the
normal `f(args)`. Every object you create that is awaitable should eventually be
awaited, Python will show a warning otherwise.
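
A minimal sketch of that behavior (the function `double` is invented for
illustration): calling an async function just builds a coroutine object; the
body only runs once the object is awaited.

```python
import asyncio


async def double(x: int) -> int:
    return 2 * x


async def main() -> int:
    # This call does not run double(); it builds an awaitable
    # coroutine object
    coro = double(21)
    # The body runs here, when the object is awaited
    return await coro


result = asyncio.run(main())
print(result)
```

If `coro` were never awaited, Python would emit a "coroutine ... was never
awaited" `RuntimeWarning` when the object is garbage collected.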

`async` is great for processing that takes time but shouldn't hog up all the
CPU. It is mostly used for "reactive" programs that do something based on
external input (GUIs, networking, etc).

It is also possible to run blocking, non-`async` code in a thread from `async`
code by awaiting `asyncio.to_thread(sync_function, *args)`.

```{literalinclude} piexample/asyncpi_thread.py
:linenos:
```

Since the actual multithreading above comes from moving a function into threads,
it is identical to the threading examples when it comes to performance (same-ish
on normal Python, faster on free-threaded).

Outside of the `to_thread` part, we don't have to worry about the usual thread
issues, like data races and thread safety, as it's just oddly written
single-threaded code.