New low-level proxying C API #15737
Conversation
cc @kripken, @sbc, @juj. This code builds, but is completely untested. Initial comments on the API and implementation are welcome. I plan to add a test of the public API, but I would like to add unit tests for the various internal components of the implementation as well, specifically the logic for growing and allocating new task queues. Does anyone have a good idea of how to do that?
Looking very nice!
I wonder if this is a good time to try to introduce the idea of internal headers to emscripten, so that this could be landed without any user having access to it?
We could then build higher level stuff on top of it and polish it before making it public. Might be as simple as putting the header alongside the source code.
Of course it would still be good to be able to write tests against it, so the test code might have to jump through some hoops to get this header on its include path.. but I think that is OK.
system/include/emscripten/proxying.h
Outdated
// nonblocking and safe to run at any time, similar to a native signal handler.
em_proxying_queue* emscripten_proxy_get_system_queue();

// Execute all the tasks enqueued for the current thread on the given queue.
Maybe say something about what happens if new tasks are queued while running the tasks.
system/include/emscripten/proxying.h
Outdated
int emscripten_proxy_async(em_proxying_queue* q,
                           pthread_t target_thread,
                           void (*func)(void*),
                           void* arg);
What if I want to be notified when the work is done? I guess I would use a condition variable at a higher level?
Yes, that's what I would expect.
system/lib/pthread/proxying.c
Outdated
if (task_queue_full(tasks)) {
  // Allocate a larger task queue.
  int new_capacity = tasks->capacity * 2;
  task* new_tasks = malloc(sizeof(task) * new_capacity);
Can we use realloc here to avoid malloc + memcpy + free pattern?
Yes, but only for the case where the queue doesn't wrap around the end of the buffer. Since we need to malloc + memcpy + free on one code path anyway, it's simpler to just do it for both paths and share all the code except for the memcpy.
system/lib/pthread/proxying.c
Outdated
static int task_queue_enqueue(task_queue* tasks, task t) {
  if (task_queue_full(tasks)) {
    // Allocate a larger task queue.
    int new_capacity = tasks->capacity * 2;
Can you split out another function here:
// Attempt to double the size of the queue.
int new_capacity = tasks->capacity * 2;
if (!task_queue_grow(tasks, new_capacity))
  return 0;
system/lib/pthread/proxying.c
Outdated
  int tail;
} task_queue;

static int task_queue_empty(task_queue* tasks) {
How about adding `// Pre-condition: em_task_queue mutex is held by calling thread` here and in the following two functions?
Oh, I see you have used the `// Not thread safe.` convention below.. maybe use that here too?
system/lib/pthread/proxying.c
Outdated
if (q == NULL) {
  return NULL;
}
pthread_mutex_init(&q->mutex, NULL);
`q->mutex = PTHREAD_MUTEX_INITIALIZER` works too and I think is a little more readable.
In that case we can simplify this initialization to be just a single expression.
assert(q != NULL);
task_queue* tasks = NULL;
for (int i = 0; i < q->size; i++) {
  if (pthread_equal(q->task_queues[i].thread, thread)) {
Why not use TLS here to avoid the linear search? I.e. can't `task_queues` be accessed via `pthread_getspecific`?
That would work for looking up the queue for the current thread, but is there a way to make that work for looking up the queue for another thread? I guess we could have both TLS and the array, but that seems more complicated for unclear gain. I expect the linear search to be very fast anyway, since I don't imagine there would ever be more than a small handful of target threads.
system/lib/pthread/proxying.c
Outdated
if (pthread_equal(thread, EM_CALLBACK_THREAD_CONTEXT_CALLING_THREAD)) {
  return pthread_self();
}
return thread;
Is this needed here? Maybe we can limit the use of these macros to the high layers?
I'm slightly worried that this will be a footgun for users, but I'm not sure how often these macros are used in the first place. I think you're right that keeping these out of the low-level API makes sense. FWIW, I decided against using `EMSCRIPTEN_RESULT_{SUCCESS,FAILURE}` here for largely the same reason; those macros also come from html5.h and seemed better suited for higher-level APIs.
system/lib/pthread/proxying.c
Outdated
                           pthread_t target_thread,
                           void (*func)(void*),
                           void* arg) {
  if (q == NULL) {
Why not declare this undefined behaviour and just `assert(q)` here? (I think this is what musl does everywhere else, for example `pthread_join(0)` or `fread` from a NULL file handle.)
  return 0;
}
int empty = task_queue_empty(tasks);
if (!task_queue_enqueue(tasks, (task){func, arg})) {
`goto failed` and put the unlock and return at the end of the function?
Implement a new proxying API that is meant to be suitable both for proxying in syscall implementations as well as for proxying arbitrary user work. Since the system proxying queue is processed from so many locations, it is dangerous to use it for arbitrary work that might take a lock or otherwise block, and the work sent to it has to be structured more like native signal handlers. To avoid this limitation, the new API allows users to create their own proxying queues that are processed only when the target thread returns to the JS event loop or when the user explicitly requests processing.

In contrast to the existing proxying API, this new API:

- Never drops tasks (except in the case of allocation failure). It grows the task queues as necessary instead.
- Does not attempt to dynamically type or dispatch queued functions, but rather uses statically typed function pointers that take `void*` arguments. This simplifies both the API and implementation. Packing of varargs into dynamically typed function wrappers could easily be layered on top of this API.
- Is less redundant. There is only one way to proxy work synchronously or asynchronously to any thread.
- Is more general. It allows waiting for a task to be explicitly signaled as done in addition to waiting for the proxied function to return.
- Uses arrays instead of linked lists for better data locality.
- Has a more uniform naming convention.

A follow-up PR will reimplement the existing proxying API in terms of this new API.
Force-pushed from ffc18e7 to be3de8f.
@sbc100 I've addressed all your comments, done some additional functional decomposition, and added a test that shows that proxying and the queue growth all work as expected. The next step will be to get the
This is ready for proper review now. The test covers the full API, both when the target thread is explicitly executing queued tasks and when the target thread is idling on the JS event loop. Additionally, the tests exercise the logic for adding new task queues (even while actively executing tasks, which used to be a buggy case) and for growing individual task queues.
if (task_buffer == NULL) {
  return 0;
}
*tasks = (task_queue){.thread = thread,
Do you need the `(task_queue)` cast here?
Yes, otherwise I get `error: expected expression`. This is a necessary part of the compound literal syntax.
}

// Not thread safe.
static task task_queue_dequeue(task_queue* tasks) {
Do we need a check here for the queue being empty? Or is that a precondition at all the call sites?
It's a precondition at the call site. I'll add that to the comment.
if (q == NULL) {
  return NULL;
}
*q = (em_proxying_queue){.mutex = PTHREAD_MUTEX_INITIALIZER,
cast needed?
Yes, this is another compound literal expression.
Interesting.. I don't remember using the casts when I've used that syntax in the past... seems like the compiler should be able to know.
For example, in this case there is no cast needed:
typedef struct {
int a;
int b;
} Foo;
Foo f = {.a = 1, .b = 2};
Yeah, that should be simple type inference since C doesn't have any sort of struct subtyping. I'm not sure what the rationale for requiring the type to be repeated is.
self.set_setting('EXIT_RUNTIME')
self.set_setting('PROXY_TO_PTHREAD')
self.set_setting('PTHREAD_POOL_SIZE=3')
self.set_setting('INITIAL_MEMORY=32mb')
Do you need 32mb here?
I need something larger than the default, and I arbitrarily chose 32mb as something that worked. It seemed fishy that the test was using so much memory, but instrumenting dlmalloc to print allocation sizes and stack traces didn't produce anything unexpected. By far the largest allocations are for the pthread stacks, but those are all properly freed once the threads are joined AFAICT.
Our default pthread stack size is 2mb, which is way too large IMHO, but also you only have 3 threads here, so I can't see how that would require this much memory. lgtm either way though.. trying to reduce memory overhead is not what this change is about.
tests/test_core.py
Outdated
self.set_setting('PROXY_TO_PTHREAD')
self.set_setting('PTHREAD_POOL_SIZE=3')
self.set_setting('INITIAL_MEMORY=32mb')
self.set_setting('EXPORTED_FUNCTIONS=_emscripten_proxy_execute_queue,_main')
If you mark `_emscripten_proxy_execute_queue` as `EMSCRIPTEN_KEEPALIVE`, I think it will be exported if and only if proxying.o is included, which seems like what you want (then you can delete this line I think).
Perhaps add a comment saying `Exported so it can be called by worker.js`.
I will make that minimal change, but I wonder whether the other functions should be similarly exported since they're designed to be called on arbitrary threads as well.
The "exporting" we are talking about is only needed for the case when external JS code (e.g. user code, or worker.js) needs to call a function directly.
Oh I see. Should we add it to the list here then? https://github.com/emscripten-core/emscripten/blob/main/emcc.py#L1971-L1982
We could.. but adding `EMSCRIPTEN_KEEPALIVE` to the source code would allow it to be exported only if the object file is used.. which would avoid paying the cost in all pthread-based programs.
Even better, you can do `if hasExportedFunction('_emscripten_proxy_execute_queue')` in the library code (and the worker.js code) to avoid including the message handlers unless they are needed.
tests/test_core.py
Outdated
def test_pthread_proxying(self):
  self.set_setting('EXIT_RUNTIME')
  self.set_setting('PROXY_TO_PTHREAD')
  self.set_setting('PTHREAD_POOL_SIZE=3')
I think only one of PROXY_TO_PTHREAD and PTHREAD_POOL_SIZE should be needed (i.e. you don't need a pool if you are proxying your main).
test_proxying_queue_growth();

printf("done\n");
emscripten_force_exit(0);
Just `return 0` should be enough I think?
It turns out that using `return 0` would cause the test to hang.
That's probably worth investigating... but we don't need to do that as part of this PR.
task_queue* tasks = tasks_index == -1 ? NULL : &q->task_queues[tasks_index];
if (tasks == NULL || tasks->processing) {
  // No tasks for this thread or they are already being processed.
  pthread_mutex_unlock(&q->mutex);
Given that tasks are dequeued before being run and the mutex is unlocked while they run.. why not allow recursive running of the queue? It certainly seems safe enough.
If we are going to fail due to a recursive call, perhaps we should return an error code so that we can detect the case where we have queued items but are refusing to run them.
The old proxying API prevents recursive queue processing, which makes sense given how frequently it is processed, although I don't know whether it's necessary or not. Since we want to rewrite the old API in terms of the new one, I think it makes sense to be conservative and match the old behavior here, then see what happens if we try to relax it by allowing recursive processing in the future. Another option would be to make the system queue special in the new API and allow recursive processing of all queues except the system queue, but I would rather not special case the system queue like that.
I'll add a TODO to investigate this, though.
What about leaving out this logic and adding it instead to the old API when it calls into this one? (I guess it could be as simple as a CAS on an `is_processing` global?) It would make this API slightly simpler and lower level, which seems in line with what you are going for, maybe?
Unfortunately this can't be an entirely separate layer. As part of the rewrite of the old API I would like to remove the postMessage handler for the old system and have everything go through the new handler. But the new handler would then need to call into a wrapper function for handling the recursion avoidance logic specific to the system queue. Since this recursion avoidance can't be entirely layered on top of the new system, it seems better to treat it uniformly across all queues. WDYT?
I think it's possible that removing the recursion guard in a follow-up might Just Work, though. Here's the relevant comment from the old system:
// It is possible that when processing a queued call, the control flow leads back to calling this
// function in a nested fashion! Therefore this scenario must explicitly be detected, and
// processing the queue must be avoided if we are nesting, or otherwise the same queued calls
// would be processed again and again.
You're right that the new system is different in that it dequeues before calling, so it won't run into this specific problem. I wouldn't be surprised if there are other reasons we currently depend on this behavior, so I would rather investigate as a follow-up.
Yes, I even had a PR up for a while to remove the nesting limitation.. but I agree there could be other semantic reasons why recursive running is not desirable. I can't think of them.. but I suspect they exist.
lgtm to this change with or without the recursion restriction (perhaps mention in the docs that recursion doesn't currently work, and return failure when we fail to run for this reason).
My WIP PR that I didn't really expect to land that removes the recursion/nesting limitation on the task queue runner: #15190
This looks great!
One thing I am uncertain about is how this relates to the original proxy queue code - it looks like the intent is to replace it, but then again, this code does not change any of that, but duplicates(?) a new system queue?
@@ -1197,6 +1197,25 @@ var LibraryPThread = {
    worker.postMessage({'cmd' : 'processThreadQueue'});
  }
  return 1;
},

_emscripten_notify_proxying_queue: function(targetThreadId, currThreadId, mainThreadId, queue) {
Can we locate this in a .js file of its own, e.g. `library_proxy_queue.js`?
That sounds nice. I'll look into it.
It turned out to be pretty simple to move this into its own JS file, but given that the implementation uses internal details of the pthread library (e.g. `PThreads.pthreads[targetThreadId]` and `pthread.worker`), and given that the existing `_emscripten_notify_thread_queue` will be removed in the next PR, I'm not sure there's much benefit to this after all.
I think it's small enough and closely coupled enough to belong here... and as you say, we hope it will replace `_emscripten_notify_thread_queue`, which is already here.
} else if (e.data.cmd === 'processProxyingQueue') {
  if (Module['_pthread_self']()) { // If this thread is actually running?
    Module['_emscripten_proxy_execute_queue'](e.data.queue);
  }
This part won't DCE away if the user does not utilize this proxying queue.
It might be worth adding a `#if LIBRARY_PROXY_QUEUE` preprocessor guard that gets enabled when the user is targeting the new proxy queue mechanism.
IIUC, there wouldn't be any benefit to having that macro once we start using this for the system queue as well, right?
I think you can wrap this in `hasExportedFunction('__emscripten_proxy_execute_queue')`, which will have the effect of only including it when needed. (Note the double underscores.)
system/lib/pthread/proxying.c
Outdated
// The target thread for this task_queue.
pthread_t thread;
// Recursion guard. TODO: We disallow recursive processing because that's what
// the old proxying API does. Experiment with relaxing this restriction.
This comment reads oddly. The sentence `We disallow recursive processing because that's what the old proxying API does` suggests that recursion is disallowed because the old queue did that (and it was undesired?), but then the second sentence reads like this wants to enable that as a feature?
Or does the comment want to say that "recursive processing is disallowed here, and it is not a problem since for such use cases, the old proxying API can be used instead"?
I want to match the old behavior for now to make switching the system queue to the new queue implementation as much of a no-op as possible. Once that switch is done, it might be nice to allow recursive processing just to simplify the system. I'll make the comment more detailed.
  return 1;
}

static void task_queue_deinit(task_queue* tasks) { free(tasks->tasks); }
`task_queue_free()`? (feel free to ignore this suggestion)
I would like to keep `deinit` because it more clearly pairs with `init`, and because it's not `tasks` that's being freed, but rather one of its members.
system/lib/pthread/proxying.c
Outdated
static void task_queue_deinit(task_queue* tasks) { free(tasks->tasks); }

// Not thread safe.
static int task_queue_empty(task_queue* tasks) {
`task_queue_is_empty()`? (feel free to ignore this suggestion - I always disliked vector::empty() since people can often read its meaning as a verb, e.g. "empty out the trash")
// Get the queue used for proxying low-level runtime work. Work on this queue
// may be processed at any time inside system functions, so it must be
// nonblocking and safe to run at any time, similar to a native signal handler.
em_proxying_queue* emscripten_proxy_get_system_queue();
I first interpreted this PR as offering a user-level mechanism for proxying queues? But this function reads like it is intending to supplant the original proxying queue code?
In addition to offering user-level queues, the intention is to reimplement the existing API to use this new queue implementation in a follow-up PR, mostly just to reduce the number of queue implementations we have.
// Execute all the tasks enqueued for the current thread on the given queue.
// Exported for use in worker.js.
EMSCRIPTEN_KEEPALIVE
void emscripten_proxy_execute_queue(em_proxying_queue* q);
Does this get emitted in all builds, even if the user does not use this proxying queue? This should get gated behind a `#if LIBRARY_PROXY_QUEUE` if so.
The way I was suggesting this should work is that if any function from system/lib/pthread/proxying.c is included in the link, then this symbol will be exported for JS so the JS can call it (that is how EMSCRIPTEN_KEEPALIVE works).
If there is no dependency on proxying.c at all, then nothing should be exported. This way the JS code can use `hasExportedFunction('emscripten_proxy_execute_queue')` and know that it will be exported if and only if the program makes some use of proxying.
@tlively can you put `EMSCRIPTEN_KEEPALIVE` on the definition in the C file instead? (or does that not work?)
Yes, I can make that change.
It looks like the new test_pthread_proxying sometimes hangs when run with asan, so I'll investigate that before landing this. I was also working on the follow-up PR to use the new queue implementation for the system queue and had everything working except for similar hangs in test_pthread_proxying, so it's probably the same underlying problem.
Rewrite the old threading.h proxying API used for internal system call implementations in terms of the new proxying API introduced in #15737.