First pass at a Go-style select statement #13661
Conversation
@@ -0,0 +1,40 @@
function select_test(t1, t2, t3)
Add this to test/choosetests.jl, and add the license header.
cc @amitmurthy
Can this become generic enough to be able to wait on any waitable type? Tasks, …
That seems like a good goal. But in

```julia
@select if condition |> value
    @show value
end
```

what should `value` be when `condition` is, say, a Task? Or that store-value select case type could be disallowed for non-channels, with only

```julia
@select if condition
    println("$condition finished first")
end
```

working. I think at this point, the only type of …
All waitables are supported now.
Nice! I was wondering if the implementation would have been simpler if we had a way to kill a task. In that case we would not need the extra lock in Channel and the pair of (waiters, evaluators); the "winning" evaluator could just kill the other tasks…
A couple of issues I just realized:

1) Between a waiter being notified and the winning clause actually running, another task can take the data out of the channel.
2) The losing waiter tasks are not cleaned up immediately; they stay queued on the objects they were waiting on.
My intention was that the new lock field in channels would deal with 1): nothing can take data out of the channel between the notify and the winning evaluator running, because the waiter holds the lock the whole time.

As a bonus to that approach, channels will have to become lockable to be threadsafe anyway once the multithreaded branch comes into use, so it seemed a way of killing two birds with one stone.

I talked with @vtjnash a while ago about the possibility of a task-killing function; he seemed reluctant about that idea, but I can't remember why now. Maybe he can chime in here. It does seem like that's the simplest approach, unless there is a deep issue with killing tasks I'm not aware of. You're definitely right about 2), though.
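As a sketch of the "two birds" point (illustrative only; the type and field names here are invented, not the PR's code): once a channel carries its own lock, the same mutex that reserves data for a select evaluator can also serialize access from multiple threads.

```julia
# Illustrative sketch, not the PR's implementation: a buffered channel with
# an internal lock, so select-style reservation and (eventual) multithreaded
# access can go through the same mutex. All names here are invented.
mutable struct LockedChannel{T}
    mutex::ReentrantLock
    cond::Condition       # takers block here waiting for data
    buf::Vector{T}
end
LockedChannel{T}() where {T} = LockedChannel{T}(ReentrantLock(), Condition(), T[])

function Base.put!(ch::LockedChannel, v)
    lock(ch.mutex)        # same lock a select waiter would hold to reserve data
    push!(ch.buf, v)
    notify(ch.cond)       # wake tasks blocked waiting for data
    unlock(ch.mutex)
    return v
end
```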
Scenario:

- Task 1 -> …
- Task 4 puts data into …
- Both Task 2 and Task 3 are notified.
- Task 2 is run and it writes into …

edit: Now before Task 1 is run, Task 3 gets executed, which removes data from …. Task 1 now tries a ….

Cleaning up the waiting tasks immediately (point 2 above) will not be easy, since they are all waiting on the actual objects. Even with a longer ….

We need a kill-task; some discussion here: #6283
The presumed need to kill a task is probably a flaw in the API design here. It's hard to make specific comments without a more concrete proposal (aka Julep) to reference, though. This sounds like it may be starting down the path of implementing the waitq idea from https://github.com/JuliaLang/julia/pull/7275/files#diff-121162110854f09c3be5b39209646009R220, so dropping this reference in as a cross-link.
But doesn't the lock in https://github.com/JuliaLang/julia/pull/13661/files#diff-ab1c387bfae5c04f18ba106ec410c7f8R49 block Task 4 from removing data, given that https://github.com/JuliaLang/julia/pull/13661/files#diff-ab1c387bfae5c04f18ba106ec410c7f8R214 acquired the lock earlier? Task 4 gets put back on the wait queue while it waits for the mutex; meanwhile Task 3 runs successfully, unlocks the mutex, and Task 4 gets scheduled again. That does reveal a different problem with the code as written, though: once the mutex is unlocked and Task 4 is rescheduled, it will at that point try to take without checking again that data is still available, even though the channel is empty. The code could be changed to cope with that.
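For what it's worth, the standard fix for that wake-then-take hazard is to re-check availability in a loop after waking, instead of assuming the wakeup guarantees data. A minimal sketch, assuming illustrative `mutex`/`cond`/`buf` fields rather than the PR's actual Channel layout:

```julia
# Sketch of the "re-check after waking" pattern; field names are illustrative.
function safe_take!(ch)
    lock(ch.mutex)
    # A task woken from wait() may find the data already gone: another task
    # can run between the notify and this task resuming, so loop and re-check
    # rather than taking unconditionally.
    while isempty(ch.buf)
        unlock(ch.mutex)   # don't hold the lock while blocked
        wait(ch.cond)
        lock(ch.mutex)     # reacquire, then re-check the predicate
    end
    v = popfirst!(ch.buf)
    unlock(ch.mutex)
    return v
end
```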
I meant Task 3 takes the data which was meant to be taken by Task 1 (the one with the eval blocks in …).
Also, I have not clearly understood the need for the extra lock.
That lock is exactly intended to deal with the race condition you're bringing up. Let me explain more clearly what I was going for. The waiter looks like:

…

So once the waiter holds the lock, no other task can take the data out. Now say at some point before the evaluator can call …
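Since the snippet didn't survive this excerpt, here is a hedged reconstruction of the waiter/evaluator protocol as described: the waiter grabs the channel's lock before notifying, so nothing can take the data out from under the evaluator. All names here (`mutex`, `winner`, `take_locked!`) are guesses for illustration, not the PR's actual identifiers.

```julia
# Hedged reconstruction of the waiter described above; names are illustrative.
function waiter(ch, winner, clause_id)
    wait(ch)                    # block until ch has data available
    lock(ch.mutex)              # reserve ch: no other take! can now proceed
    notify(winner, clause_id)   # wake the @select machinery for this clause
end

# The evaluator then runs the winning clause while the reservation is still
# in place, and releases it afterwards.
function evaluator(ch, body)
    v = take_locked!(ch)        # hypothetical take! variant for a held mutex
    unlock(ch.mutex)
    body(v)                     # run the user's clause body with the value
end
```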
AFAIK, … will be executed without yielding to other tasks in between. Notified tasks (due to the notify) will not run until the current task yields.
Oh, does it not yield? My bad. I guess it would need to explicitly yield and have some logic to ensure that it stays blocked until the evaluator is run.

I looked at the waitq code; we could implement the same idea. Make just one task per condition, instead of a separate waiter and executor. Each task listens for a special shutdown exception in a try/catch. That is probably not super from a performance perspective, but it seems simple and robust.
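A sketch of the one-task-per-clause idea under discussion, assuming invented names (`SelectDone`, `select_first`): each clause gets its own task that wraps its wait in a try/catch, and the winner shuts the losers down by scheduling a sentinel exception into them (here via `schedule(t, exc; error=true)` as the task-killing primitive).

```julia
# Sketch only: one task per select clause; the winner cancels the losers by
# scheduling a sentinel exception into them. SelectDone and select_first are
# invented names for illustration, not the PR's API.
struct SelectDone <: Exception end

function select_first(waitables, bodies)
    winner = Channel{Int}(1)
    tasks = Task[]
    for (i, w) in enumerate(waitables)
        t = @async try
            wait(w)            # block on this clause's waitable
            put!(winner, i)    # first task to get here wins
        catch err
            err isa SelectDone || rethrow()  # swallow only the shutdown signal
        end
        push!(tasks, t)
    end
    i = take!(winner)
    for (j, t) in enumerate(tasks)
        # throw the shutdown exception into every losing task still alive
        j != i && !istaskdone(t) && schedule(t, SelectDone(); error=true)
    end
    bodies[i]()                # run the winning clause's body
    return i
end
```

Catching the exception inside each clause task, rather than killing it from outside with no cleanup path, is what keeps this simple and robust, at the cost of one task per clause.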
OK, here is take 2. A cleaner rewrite of the implementation using the task-killing strategy. Still needs docs, but otherwise I think it's ready.
Still needs docs, and preferably an implementation spec to describe what you are attempting to accomplish here.
Were you thinking that spec would be via a comment block at the top of the new code, or something like an external markdown document that I link to?
I could create a Julep issue and link it to this PR. |
Just doing some cleanup of PRs that might be stale.
Progress: …

Proposed syntax (cf. the Go select construct grammar):
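The syntax block itself didn't survive this excerpt. Reconstructed from the `@select` snippets quoted earlier in the thread, the clause forms were presumably along these lines (the actual proposed grammar may have included more):

```julia
# Reconstruction from earlier snippets in this thread, not the lost comment.
@select if c1 |> value
    # c1 became ready first; the taken value is bound to `value`
    @show value
elseif c2
    # c2 (any waitable) became ready first; nothing is bound
    println("c2 finished first")
end
```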