
Should we deprecate trio.Event.clear? #637

Closed
njsmith opened this issue Aug 29, 2018 · 37 comments · Fixed by #1093

@njsmith
Member

njsmith commented Aug 29, 2018

It occurs to me that I've seen 4 different pieces of code try to use Event.clear lately, and they were all buggy.

Now, maybe this isn't Event.clear's fault, but it makes me wonder :-). (And this is partly coming out of my general reassessment of the APIs we inherited from the stdlib threading module, see also #322 and #573.)

The core use case for Event is tracking whether an event has happened, and broadcasting that to an arbitrary number of listeners. For this purpose, clear isn't meaningful: once an event has happened, it can't unhappen. And if you stick to this core use case, Event seems very robust and difficult to mis-use.

All of the trouble above came when someone tried to use it for something outside of this core use case. Some of these patterns do make sense:

  • If you have a periodic event, you might want to have the semantics of "wait for the next event". That can be done with an Event, where waiters call await ev.wait() and wakers call ev.set(); ev.clear(). But it can also be done with a Condition or a ParkingLot, or we could have a PeriodicEvent type if it comes up enough... for a dedicated PeriodicEvent it might also make sense to have a close method of some kind to avoid race conditions at shutdown, where tasks call wait after the last event has happened and deadlock.

    • Another option in many cases is to model a periodic event by creating one Event object per period. This is nice because it allows you to have overlapping periods. For example, consider a batching API, where tasks submit requests, and then every once in a while they get gathered up and submitted together. The submitting tasks want to wait until their request has been submitted. One way to do it would be to have an Event for each submission period. When a batch is gathered up for submission, the Event gets replaced, but the old Event doesn't get set until after the submission finishes. Maybe this is a pattern we should be nudging people towards, because it's more general/powerful. (A sketch of this pattern follows this list.)
  • The websocket example above could be made correct by moving the clear so that it's right after the wait, and before the call that consumes the data (data = self._wsproto.bytes_to_send()). (It might be more complicated if the consuming call wasn't itself synchronous.) So ev.wait(); ev.clear() can make sense... IF we know there is exactly one task listening. Which is way outside Event's core use case. In this case, it's basically a way of "passing control" from one task to another, which is often a mistake anyway – Python already has a very easy way to express sequential control like this: just execute the two things in the same task :-). Here I think a Lock would be better in any case; see "WebSocketConnection should implement the trio.abc.AsyncResource interface" (trio-websocket#3).
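To make the batching version concrete, here is a minimal sketch of the one-Event-per-period pattern (send_batch is a hypothetical stand-in for whatever actually submits the batch):

import trio

class Batcher:
    # One Event per submission period: submitters capture the current
    # period's Event; flush() replaces it rather than clearing it.
    def __init__(self):
        self._pending = []
        self._submitted = trio.Event()

    async def submit(self, request):
        self._pending.append(request)
        event = self._submitted      # this period's Event
        await event.wait()           # wakes only once *this* batch is done

    async def flush(self):
        batch, self._pending = self._pending, []
        event, self._submitted = self._submitted, trio.Event()  # new period begins
        await send_batch(batch)      # hypothetical submission call
        event.set()                  # old Event set only after submission finishes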

Are there any other use cases where Event.clear is really the right choice? Or where maybe it's not the right choice, but it's currently the best available choice?

@smurfix
Contributor

smurfix commented Aug 29, 2018

Well … I'd start with adding that close-able PeriodicEvent, though we might want to select a better name (not all multi-events are "periodic"). Personally I'd use "signal" but that's taken. "Blinker"?

Then, deprecate Event.clear.

As to the websocket example: If you have a background task that sends data, shouldn't you use a queue?

@njsmith
Member Author

njsmith commented Aug 29, 2018

@smurfix I'm not sure whether the PeriodicEvent use case actually comes up often enough to make it a new standard thing... I was thinking more that we should start keeping an eye out for it.

@njsmith
Member Author

njsmith commented Aug 29, 2018

(Note that it wouldn't have been useful in any of the cases above, except maybe @belm0's and my impression is that he needs something more complex anyway.)

@miracle2k
Contributor

I have been using Event.clear() (in the "right" way being discussed above) multiple times, but whenever I do, I get scared - does it really behave the way I expect? Can I trust that all waiters have been awoken before clear() goes into effect? I have to think it through and look at the source every time.

So I agree that a separate API for this use case might make things easier and clearer.

@njsmith
Member Author

njsmith commented Aug 29, 2018

@miracle2k oh cool, tell us more :-). When you say "the right way", do you mean doing ev.set(); ev.clear() for a use case like my "periodic event" suggestion? (Though @smurfix is right we should have a name that doesn't imply the periods are equal length, maybe "repeated event".) Can you say more about your use case?

@belm0
Member

belm0 commented Aug 29, 2018

I'm not sure having another special-purpose event is the right path in terms of ease of use and clear API.

A counterexample: it was trivial to make an event supporting level and edge trigger with any polarity and a simple property interface for the value (sorry, impl omitted):

class BoolEvent:
    """Boolean offering the ability to wait for a value or transition."""

    def __init__(self, value=False): ...

    @property
    def value(self): ...

    @value.setter
    def value(self, x): ...

    async def wait_value(self, x=True):
        """Wait until given value."""
        ...

    async def wait_transition(self, x=True):
        """Wait until transition to given value."""
        ...

Using a property and not having the user bother about set/clear/get is very nice, as is making level and edge trigger very clear in the API and supporting all use cases at once.

Implementation uses 4 ParkingLot instances, but these can be allocated lazily, and obviously halved if you remove the polarity option.

And this same API can be used with arbitrary value types and predicates:

foo = ValueEvent(12.5)
...
await foo.wait_transition(lambda x: 50 < x <= 100)
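For what it's worth, here is a guess at the shape of the omitted implementation, restricted to the default True polarity so two ParkingLots suffice instead of four (trio.lowlevel was spelled trio.hazmat when this was written; a sketch only, not the actual code):

import trio

class BoolEvent:
    def __init__(self, value=False):
        self._value = value
        self._level_lot = trio.lowlevel.ParkingLot()       # wait_value(True)
        self._transition_lot = trio.lowlevel.ParkingLot()  # wait_transition(True)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, x):
        transitioned = x and not self._value
        self._value = x
        if x:
            self._level_lot.unpark_all()       # wake level-triggered waiters
        if transitioned:
            self._transition_lot.unpark_all()  # wake edge-triggered waiters

    async def wait_value(self, x=True):
        assert x, "sketch covers the default polarity only"
        if self._value:
            await trio.lowlevel.checkpoint()   # still a checkpoint, like Event.wait
            return
        await self._level_lot.park()

    async def wait_transition(self, x=True):
        assert x, "sketch covers the default polarity only"
        await self._transition_lot.park()      # always waits for the *next* edge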

@miracle2k
Contributor

@njsmith Yes, it's the 'wait for the next event' use case you already mentioned.

@njsmith
Member Author

njsmith commented Aug 29, 2018

@miracle2k Do you have 1 waiter, or many? Is there data associated with the events? Do you have any trouble around shutting down?

@njsmith
Member Author

njsmith commented Aug 30, 2018

Further observation: there's a simple theoretical reason that Event.clear specifically is error-prone. If Event.clear doesn't exist, then Event.wait guarantees a simple invariant: "the value is now True". But if you have Event.clear, then the invariant that Event.wait guarantees is: "the value was True at some point in the recent past, but at this point who knows"

[edited: was "if you have Event.set"]

@miracle2k
Contributor

@njsmith Never had any issue with shutting down. I think that is probably because my shutdowns, even if I want a controlled shutdown of part of the task tree, always use trio's cancellation, so that should prevent wait() from deadlocking (I assume).

I just ran across a case where I used this approach: It is essentially one task communicating updates to a second task, but I am not using a queue, because if the second task is too busy to respond to all updates, I just want it to work with the latest update once it has time.

So the publisher does:

current_state = 42
state_available.set()

and the consumer does:

while True:
    await state_available.wait()
    state_available.clear()
    await process(current_state)

@smurfix
Contributor

smurfix commented Aug 30, 2018

I just ran across a case where I used this approach: It is essentially one task communicating updates to a second task, but I am not using a queue, because if the second task is too busy to respond to all updates, I just want it to work with the latest update once it has time.

Hmm. I'd use a queue for that, adding a (synchronous) Queue.put_force method that drops the oldest element if the queue is full.

@njsmith
Member Author

njsmith commented Aug 30, 2018

@miracle2k Ah interesting! So for this use case a pure wait-for-next-repeating-event isn't actually quite the same as what you're doing now. Specifically, consider the case where await process(...) takes so long that a new update is already published before it finishes. With your current design, the consumer will detect this and immediately start processing it without waiting. With a wait-for-next-repeating-event API, in this case it would ignore the new data and sit idle until new, new data became available.

This way of using an Event as a single-consumer, is-there-work-to-do flag is actually pretty similar to how SignalReceiver uses it (#619). It's a bit different because SignalReceiver is like a queue with idiosyncratic rules (basically duplicate items get dropped), so we only clear after observing that the queue is empty. "Queue with special rule for coalescing in-flight objects" sounds like it covers both of these cases, and is kind of where @smurfix's mind is going too. Actually, a kind of queue where each item has a key and a value and we only save the latest value for each key would be sufficient for both SignalReceiver (which would always use None for the value) and for @miracle2k's case (which would always use None for the key). I wonder if there are any cases that would use non-None keys and values simultaneously.

(@smurfix though on another note, I've also recently been pondering the possibility of having a put_force that ignores limits but doesn't drop any items – the motivating example is a work queue where you have items coming in from some external process, plus some work items generate new work items when processed, and you have some kind of rate limiting on processing items off of the queue. Here you want to apply backpressure to the external source, but not to the workers themselves, because that risks deadlocks. The relevance here is mostly to keep in mind that there might be multiple kinds of "forcing".)

Another direction to generalize @miracle2k's case would be to make it multi-consumer, like a value that broadcasts updates. The interesting thing here is that you want to track, for each consumer, which was the latest update that it saw. Maybe something like, calling handle.wait_new_value() returns a pair (new_value, new_handle), and then you process new_value and call new_handle.wait_new_value()? That would be simple to implement using the "one Event per period" model mentioned in my original post.
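A rough sketch of that handle-chaining idea, reusing the one-Event-per-period trick (all names here are illustrative, not a proposed trio API):

import trio

class _Handle:
    def __init__(self):
        self._event = trio.Event()
        self._next = None  # (new_value, new_handle), filled in by set()

    async def wait_new_value(self):
        await self._event.wait()
        return self._next

class BroadcastValue:
    def __init__(self):
        self._handle = _Handle()

    def subscribe(self):
        return self._handle

    def set(self, value):
        old, self._handle = self._handle, _Handle()
        old._next = (value, self._handle)  # chain the periods together
        old._event.set()                   # wake everyone holding the old handle

Each consumer loops with new_value, handle = await handle.wait_new_value(), so a slow consumer just walks the chain of handles and no clear() is ever needed.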

@miracle2k
Contributor

miracle2k commented Sep 7, 2018

I was working on porting aiostream to trio, and ran into another case where I wanted to use Event.clear(). The full patch is here (vxgmichel/aiostream@cd889bb), but to summarize, here is how the code works:

  • The class wraps a series of async iterables.
  • There is a generator (called completed), which returns the "next" item from the first of the wrapped generators.
  • Rather than just running all of the source generators at the same time, and adding the results to a queue, the class has its own scheduling system, where it calls anext() on no more than $max_tasks source generators, and when it gets back a result, it schedules a possibly different source generator.
  • The asyncio version uses asyncio.wait(FIRST_COMPLETED), which in theory could return multiple of the anext() calls as "done" at the same time. The asyncio code processes multiple results arriving at the same time in a particular order. I wanted to be faithful to the original.

So in short:

  • We start_soon() multiple anext(some_generator).
  • We want to wait for one or more of them to return.
  • We then want to process those generator yields at the same time, and only then schedule more anext(some_generator) calls.

To be honest, I don't know if those details are really important, or are just a byproduct of the way asyncio works, but let's say they are, and I want to port it faithfully.

So my first thought was, let's use a queue:

  • We schedule $task_limit tasks, each calling anext on a source generator, adding the result to the queue.
  • We can then wait for the first item on the queue, but also check if there are more at the same time, and get_nowait() all of the results until the queue is empty - thus simulating wait(FIRST_COMPLETED) in the case that more than a single task is returned as done at the same time.

But:

  • In theory, all the tasks could be completed at the same time, and the number of tasks is dynamic. We would have to change the maxlen of the Queue at runtime, and trio does not seem to have a public API for this.

  • Getting all items off the queue at the same time, without yielding, is certainly possible, but also seems hacky/an abuse of the primitive?

I usually ignore Condition, so I made a point of considering it. But we don't really need it - there is only one consumer. Even if there were more than one consumer, the asyncio version is written in such a way that the completed generator gives all the items of all the sources that are returned as done at the same time, to the same consumer.

So instead I opted for an event:

  • Whenever any of the start_soon(anext, some_generator) gets an item, it adds it to a simple array and set()s an event.

  • The completed() generator, if there are no items, waits for the event, copies and empties that array, fetching all the results for itself, then calls Event.clear().

It seems to work, but who knows? You guys probably do 😉

@njsmith
Member Author

njsmith commented Jun 4, 2019

I was thinking about this again today, prompted by this talk. Starting around slide 67, @bcmills discusses the case of wanting to broadcast a "repeating transition", e.g. one task can toggle between idle and busy states, and another task wants to wait for the first task to enter its idle state. He has two solutions:

  • One is the one I described above, where you allocate a new Event for each period (slides 69-70).
  • The other was new to me: instead of a bool busy/idle, you basically use an integer that increments monotonically, where busy = odd, idle = even. He considers this the less elegant solution, and I guess I agree, but it's very clever :-). And it makes an unexpected connection between the repeating-transition case discussed in this issue and the BroadcastValue idea in Broadcast channels #987. (A sketch of the counter trick follows this list.)
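A minimal sketch of the counter trick in trio terms (names are illustrative):

import trio

class BusyIdle:
    def __init__(self):
        self._counter = 0            # even = idle, odd = busy
        self._bumped = trio.Event()  # replaced on each toggle, never cleared

    def toggle(self):
        self._counter += 1
        event, self._bumped = self._bumped, trio.Event()
        event.set()

    async def wait_idle(self):
        while self._counter % 2:     # odd means busy
            event = self._bumped     # captured synchronously, so no race
            await event.wait()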

I also just read over @miracle2k's detailed post directly above this one, and it really feels to me like the solution is just... use a queue (or now, channel)? We have unbounded channels, so that takes care of the dynamic sizing issue, and for re-ordering the results, well, get_nowait is an option, but also the re-ordering idea seems unmotivated in the first place – asyncio's FIRST_COMPLETED loses some ordering information, but with a channel, whichever thing comes first in the channel is the one that completed first, full stop, so the problem being solved doesn't really exist.

I'm not really seeing any compelling use cases for Event.clear, and avoiding it seems to consistently produce better results.

@imrn

imrn commented Jun 4, 2019 via email

@belm0
Member

belm0 commented Jun 4, 2019

Summary of event patterns & usage in our app:

trio.Event is rarely used directly. When it is, it's either for single use, or else clear() immediately after wait(). Just today I fixed a race condition where there was an await call before the clear()-- it's an easy mistake. Last week I fixed a bug where Event was misused for queueing.

We use our own event APIs heavily: BoolEvent and ValueEvent, which have no clear() method and support repeated level and edge triggering, dismissing anything which might happen between wait calls. (I've already outlined them in this thread, #637 (comment). Might open source later this year.) The wait methods use ParkingLot under the hood, so effectively it's a single-use event per wait (or group of waiters). Casual users of these APIs probably miss that after a wait method returns control the value may have already changed again. For most use cases it doesn't matter.

ValueEvent allows level and edge triggering on arbitrary predicates. The interesting thing here is that, say for x >= 10, you may want to know what exact value of x satisfied the predicate. So the wait methods return the triggering value.

@smurfix
Contributor

smurfix commented Jun 4, 2019 via email

@belm0
Member

belm0 commented Jun 4, 2019

There is an ambiguity here. Will the first task that does this
wait/clear dance cause the other tasks to keep waiting until the next
set, or won't it?

Thank you. In our case trio.Event is only used between two tasks (no fan-out). For fan-out cases we use the higher level event classes which lack clear().

@tacaswell

I have a use-case where we are using an Event as a "run permit" where we have one long running task that needs to be paused for some time and then restarted. Something like:

async def worker(run_permit, items):
    for work in items:
        await run_permit.wait()
        await do_work(work)


async def pauser(run_permit, delay):
    run_permit.clear()
    await sleep(delay)
    run_permit.set()

It would be a bit awkward (but not too bad) to inject new events into the worker.

I am not sure if this is really different than "tell worker(s) that work is ready" (vs "tell workers to chill for a bit").

@njsmith
Member Author

njsmith commented Jun 4, 2019

@tacaswell I feel like a Lock might be more natural?

async def worker(run_permit, items):
    for work in items:
        async with run_permit:
            await do_work(work) 

async def pauser(run_permit, delay):
    async with run_permit:
        await sleep(delay)

It has the advantage that it's guaranteed to actually pause the worker, which the original doesn't (consider if do_work takes longer than delay).

@tacaswell

That is fair, but I was cheating a bit in the example

async def worker(run_permit, items):
    for work in items:
        await run_permit.wait()
        await do_work(work)

async def pauser(run_permit):
    run_permit.clear()

async def resumer(run_permit):
    run_permit.set()

is closer to what we are actually doing, and holding a lock not in a context block seems weirder to me than abusing Events this way. We also cancel the worker task (and then catch the exception) to try to guarantee (modulo blocking sync operations) that the worker actually pauses.

Having a class which owns a Lock, a flag, and the ability to mint new Event objects when the run permit is revoked would not be that hard to write.

@njsmith
Member Author

njsmith commented Jun 5, 2019

@tacaswell I assume this is currently in asyncio, hence the reference to catching the cancelled exception?

The other thing that Lock or similar provide is that they would mean pauser actually waits for the worker to be paused before returning. Yay causality. I agree it looks a bit odd to have acquire and release calls in separate methods, but I'm not sure it's more odd-looking than using Event to fake something like a Lock :-)

@tacaswell

@njsmith you have a talent for pulling the right threads to find the complexity in things ;)

Yes, in asyncio. We handle the causality we care about in other ways (as the causality we care about involves some cross-thread coordination, because of course it involves threads....).

I think a textual description of the use case is I have a long running iterative task that for "reasons" can't have backpressure applied to it (like a countdown timer for a rocket launch where the backpressure is a person yelling "hold" ;) ) directly via code. I want to ask on each pass through the loop "may I proceed, and if not tell me when I may" which seems like an Event.

On the other hand, a lock is "while I am running only I may touch this protected state", but there is no state to protect here.

I think the Event model scales better if I have many independent tasks that I want to be able to let spin independently, but then (temporarily) suspend all together (you could do this by flashing the lock at the top of every loop, but that also seems weird).

I think I see how to do this with a Lock, a bool, and a Condition in a not super weird way.
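For the record, a sketch of that approach (trio.Condition manages its own Lock if you don't pass one; names are illustrative):

import trio

class RunPermit:
    def __init__(self):
        self._paused = False
        self._cond = trio.Condition()

    async def pause(self):
        async with self._cond:
            self._paused = True

    async def resume(self):
        async with self._cond:
            self._paused = False
            self._cond.notify_all()     # wake every worker waiting to proceed

    async def wait_running(self):
        async with self._cond:
            while self._paused:
                await self._cond.wait()

Workers call await permit.wait_running() at the top of each loop iteration; the pausing task calls pause() and later resume().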

@xgid

xgid commented Jun 6, 2019

I've been following this thread with great interest and I find the discussion really exciting! (yes I know, I'm somewhat rare in my tastes 😄 )

I have a long running iterative task that for "reasons" can't have backpressure applied to it ... I want to ask on each pass through the loop "may I proceed, and if not tell me when I may"

@tacaswell Your description is exactly the definition of a... semaphore! I mean the real-life semaphores. I'm not sure how such a semaphore should be implemented in Trio, but the semaphore explanation in the @bcmills talk referred to by @njsmith fits the idea quite well, I think.

By contrast, for me an Event is best seen in software as "something that happened" which you can respond to, but which is always in the past. That's the approach used in browsers, though the event names themselves and the wording there are prone to create confusion (they called it onclick instead of clicked!).

So if you see an Event as always in the past and ephemeral in nature, you have no doubt that Event.clear makes no sense, do you?

HTH. Thinking in real-life examples helps me keep the models clean in my mind.

And now here is my question: how would you implement a "real-life semaphore" for @tacaswell's use case in Trio?

@belm0
Member

belm0 commented Jun 6, 2019 via email

@xgid

xgid commented Jun 7, 2019

https://trio.readthedocs.io/en/latest/reference-core.html#trio.Semaphore

@belm0 Sure, thanks! Glad to know that Trio already comes with batteries charged and uses good names for the good things! 👍 (Maybe I should start reading documentation as "normal" people do... 😜 )

@tacaswell Do you think that trio.Semaphore could be a good fit for your use case? If not, what "feature" do you find missing?

@belm0
Member

belm0 commented Jun 7, 2019

I wrote:

trio.Event is rarely used directly [in our project]. When it is, it's either for single use, or else clear() immediately after wait()

I reviewed my project's usage of clear() again-- it's more complicated.

First, there are only 13 instances of clear() in 50k lines of async/await code. Most of the tempting cases are likely offloaded to the Bool/ValueEvent utilities I mentioned. It's still possible to make mistakes with those (e.g. trying to use wait_transition() to signal that data is ready for processing), but overall they are not as pointy-sharp as Event w/clear().

When we use clear(), it's always in the context of a single task being assigned to the wait-clear-wait role (i.e. no fan out). I've identified two use cases:

trigger processing of data consumed by a dedicated task (non-queueing)

The waiter must call clear() immediately after wait(). E.g.

while True:
    await data_ready_event.wait()
    data_ready_event.clear()
    await process_data()

If a blocking operation is mistakenly placed between wait() and clear(), then there is a potential race where data is left unprocessed.

It's assumed that either we only need to queue one data item, or else data-overwrite semantics are acceptable. So a real queue seems overkill here as far as boilerplate, etc.-- but perhaps I can be convinced otherwise.

trigger execution of non-reentrant work scheduled by a dedicated task (non-queueing)

The waiter calls clear() immediately before wait(). E.g.

while True:
    await trigger_event.wait()
    await run_action()
    trigger_event.clear()

It assumes that we want to ignore signals while the non-reentrant action is running.

... so yes, if Event.clear() goes away I think I'll just make abstractions for these two use cases on top of wait queues (ParkingLot) or something.

@njsmith
Member Author

njsmith commented Jun 7, 2019

A Lock() is essentially a Semaphore(initial_value=1, max_value=1). There are some edge case differences (trio.Condition only works with Locks, Locks can only be released by the same task that acquired them), but they're basically the same thing.

On the other hand, a lock is "while I am running only I may touch this protected state", but there is no state to protect here.

Another way to think of it is "there's a token that only one task can hold at a time, and grants me permission to run".

I think the Event model scales better if I have many independent tasks that I want to be able to let spin independently, but then (temporarily) suspend all together (you could do this by flashing the lock at the top of every loop, but that also seems weird).

Note that this case exactly matches a classic reader-writer lock. ("Reader" really means "shared access" and "writer" means "exclusive access"; the names come from a common use case but there's no obligation to actually read or write anything.)

@njsmith
Member Author

njsmith commented Jun 7, 2019

@belm0 thanks for looking into that!

It's assumed that either we only need to queue one data item, or else data-overwrite semantics are acceptable. So a real queue seems overkill here as far as boilerplate, etc.-- but perhaps I can be convinced otherwise.

This sounds like a natural place to use a BroadcastValue (#987). I know your case has only one listener so you don't need "broadcast" per se, but if this was available as a primitive then your code would be value.set(obj) to set and async for obj in value.subscribe(): ... to consume, which is so minimal already that it doesn't seem worth trying to shrink it further.

(Actually, maybe we could even make it async for obj in value, where value.__aiter__() implicitly constructs an iterator to hold the necessary state.)

The waiter calls clear() immediately before wait().

This is exactly ParkingLot.park().
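That is, the second pattern collapses to something like this (run_action standing in for the real work; ParkingLot was in trio.hazmat then, trio.lowlevel now):

import trio

trigger_lot = trio.lowlevel.ParkingLot()

async def triggered_worker():
    while True:
        await trigger_lot.park()  # park() only sees unparks that happen after it
        await run_action()        # so triggers during run_action are ignored

def trigger():
    trigger_lot.unpark_all()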

@belm0
Member

belm0 commented Jun 7, 2019

Thank you-- I think that's enough ammunition to attempt to preemptively disallow Event.clear() in my project. The key insight is to use async generator abstractions.

This sounds like a natural place to use a BroadcastValue

I don't think our use cases need the "broadcast" or the "value", so the name is awkward, but the idea works. (Usually the value is a field in a local object, I'm not sure it's worth it to formally pass them around.)

Maybe:

data_ready_signal = BroadcastSignal()

async for _ in data_ready_signal:
    await process_data()

and

run_action_signal = NonReentrantSignal()

async for _ in run_action_signal:
    await run_action()

@tacaswell

Note that this case exactly match a classic reader-writer lock.

Ah, that makes sense (and I was even looking at reader-writer locks in a slightly different context this week). I am relieved that the thing I want has been wanted in the past and has a name 👍 . Thanks for tolerating my ramblings.

@bcmills

bcmills commented Jun 7, 2019

I can't speak to Trio in particular, but in most concurrent systems a clear / wait sequence is prone to racing. The usual race is:

  • Thread A completes an action.
  • Thread B enqueues more work and sets the “ready” event.
  • Thread A clears the “ready” event.

If I understand Trio's model correctly, then that race cannot occur here because there is not a schedule point between run_action and clear. However, that point is a bit subtle, and it implies that programs that use the Trio clear function cannot be safely ported to other, superficially-similar environments.

@belm0
Member

belm0 commented Jun 7, 2019

If I understand Trio's model correctly, then that race cannot occur here because there is not a schedule point between run_action and clear.

Yes. For Python, single-threaded concurrency is a sweet spot due to the GIL. I have a large program with complex concurrency, yet no locks. We haven't lost any time to debugging concurrent memory access or race conditions.

However, that point is a bit subtle, and it implies that programs that use the Trio clear function cannot be safely ported to other, superficially-similar environments.

It depends. Kotlin, for example, has structured concurrency with flexible controls on how coroutines are mapped to threads. It allows single-threading like Trio. Then combine the simplicity of single-thread concurrency with scale of many threads / processes / CPU's / machines via message passing.

@belm0
Member

belm0 commented Jun 10, 2019

Especially after looking at our project's two use cases of repeated Events and abstracting them, the existence of clear() seems harmful. Too many ways to abuse it, and too hard to review code for correctness. 👍 👍 for removal.

Regarding the abstractions, I call them "repeated events". Not great, but reflects the historical usage and the fact that they keep Event's set() method. Listener uses async iterator on the event object for its loop.

There is UnqueuedRepeatedEvent, which doesn't queue set(). It seems harmless enough when just used to trigger action in a remote task (assuming no data is passed).

The other is MailboxRepeatedEvent, which queues up to 1 set(). Data is passed out of band. It seems OK for signaling changes to collections, while use of other types of data is subject to overrun. For the latter case, use of this class is discouraged in favor of a memory channel (i.e. in-band data and back pressure).
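As a guess at the shape of these two abstractions, built as ParkingLot-based async iterables (a sketch, not the actual implementation):

import trio

class UnqueuedRepeatedEvent:
    # set() only wakes a listener that is already parked in its async for.
    def __init__(self):
        self._lot = trio.lowlevel.ParkingLot()

    def set(self):
        self._lot.unpark_all()

    def __aiter__(self):
        return self

    async def __anext__(self):
        await self._lot.park()

class MailboxRepeatedEvent(UnqueuedRepeatedEvent):
    # Queues up to one set(), so a busy listener catches up exactly once.
    def __init__(self):
        super().__init__()
        self._pending = False

    def set(self):
        self._pending = True
        self._lot.unpark_all()

    async def __anext__(self):
        if self._pending:
            await trio.lowlevel.checkpoint()  # keep each iteration a checkpoint
        else:
            await self._lot.park()
        self._pending = False

Both are used as async for _ in event: ... in the listener task.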

njsmith added a commit to njsmith/trio that referenced this issue Jun 10, 2019
@njsmith
Member Author

njsmith commented Jun 10, 2019

Looking for a review in #1093

@njsmith
Member Author

njsmith commented Jul 28, 2020

Further evidence that everyone gets Event.clear wrong:

Matthias Urlichs (@smurfix) says:

Half a day burned chasing down https://gitlab.com/pgjones/hypercorn/-/issues/144 – oh the joy of writing async code.

Turns out this was a classic Event.clear race condition that shipped in hypercorn for some time.

@smurfix
Contributor

smurfix commented Jul 28, 2020

Well, if that particular code had replaced the event with a new, cleared one, it would still have the exact same race condition …
