Sliceable dispatchers: Provide alternative to newSingle/FixedThreadPoolContext via a shared pool of threads #261
Comments
Why not define a JVM-like …?

It is also possible to consider introducing: …
Usually the GC pressure is enough to cover accidental blocks. Creating a flexible thread pool and mixing blocking and non-blocking operations in it is, in my view, a different question. Are you considering benchmarking a prototype?
ForkJoinPool uses a task queue for each thread, …
@fvasco Unfortunately, the … The appropriate value of … The questions of a flexible thread pool and blocking IO are indeed different, but it looks like they can be solved with a single implementation effort. We'll definitely start implementation with benchmarks and experiment with different strategies. The idea is that, similarly to FJP, this implementation is going to be "sticky" and will only move coroutines to another thread when absolutely necessary. We plan to be way more lazy in this respect than FJP (it can be shown that the FJP work-stealing strategy actually has an adverse performance impact on typical CSP-style code). With respect to blocking operations, this means that we'll give a blocking operation some time to complete before moving all other coroutines to another thread. It seems to be the most efficient strategy based on our study of other languages and libraries, but we'll see how it actually works out in practice.
Hi @elizarov, regarding a Semaphore used like a dispatcher, I saw your consideration too late, sorry for the miss. However, I consider it a valid option to use a …
I say yes, maybe something like:

```kotlin
fun newDedicatedThreadDispatcher(
    threadPoolSize: Int = 1,
    threadFactory: ThreadFactory? = Executors.defaultThreadFactory()
): CoroutineDispatcher
```
* The … will be replaced by another mechanism in the future. See #261 for details.
* The proposed replacement is to use the standard Java API: Executors.newSingleThreadExecutor/newFixedThreadPool and convert to a dispatcher via the asCoroutineDispatcher() extension.
I believe we shall, because we can. Something like … Moreover, I don't think we need a separate factory method to create a single-threaded pool. It seems like an obvious case of a fixed-size pool, even if some optimization is happening under the covers. Using …
What should I do if I need a guarantee that my thread pool is never blocked (for more than 50 ms)? For example, I use a coroutine as a timer (sketched below):
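The original snippet is not reproduced above; a minimal sketch of such a timer loop, with the period and the failure check assumed for illustration:

```kotlin
import kotlinx.coroutines.*

// Hypothetical timer: ticks roughly every 50 ms and fails if a tick is resumed
// far too late, e.g. because every thread in the pool is busy with CPU-bound work.
fun CoroutineScope.launchTimer(periodMillis: Long = 50) = launch {
    while (isActive) {
        val start = System.nanoTime()
        delay(periodMillis)
        val elapsedMillis = (System.nanoTime() - start) / 1_000_000
        check(elapsedMillis < 2 * periodMillis) { "timer tick delayed by $elapsedMillis ms" }
        // ... periodic work goes here ...
    }
}
```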
This code fails if for some reason all the threads in the pool are blocked (doing CPU-bound work) and the scheduler is unable to switch this coroutine to a thread in time. Will the new pool be able to battle that?
* The … will be replaced by another mechanism in the future. See #261 for details.
* The proposed replacement is to use the standard Java API: Executors.newSingleThreadExecutor/newFixedThreadPool and convert to a dispatcher via the asCoroutineDispatcher() extension.
Regarding my consideration above, issue #1088 is now available.
Hi, I saw the deprecation notice at https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/new-single-thread-context.html. Is there a recommended, current way to execute coroutines with FIFO semantics? Specifically, I need to sequence access to the SQLite database so that it occurs from a single thread.
You should not use the coroutine.
@fvasco thank you for your reply. However, I am now more confused: why are coroutines not the right tool here? My thought was that callers that require SQLite could await sub-operations dispatched to the SQLite dispatcher that uses only a single thread. Currently in my application this uses a concrete …
It is possible to run multiple coroutines concurrently on a single thread, but you want to "sequence access to the SQLite database so that it occurs from a single thread".

Put the logic in a function, for example (a sketch follows below):
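The original example is not shown above; a minimal sketch, assuming a dedicated single-thread executor and a hypothetical withDatabase helper:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// Hypothetical: a dispatcher backed by one dedicated thread, so every database
// call below runs on that thread, one at a time, in submission order.
private val dbDispatcher = Executors.newSingleThreadExecutor { runnable ->
    Thread(runnable, "sqlite-db")
}.asCoroutineDispatcher()

// Because `block` is a plain (non-suspending) function, it runs to completion
// without interleaving with other coroutines, and always on the "sqlite-db" thread.
suspend fun <T> withDatabase(block: () -> T): T =
    withContext(dbDispatcher) { block() }
```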
Is there any reason why this executor would be better than a mutex?
The reason is that it does a different thing than a mutex: …
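The original list of differences is not shown above; as one illustration of the distinction (my gloss, not the author's wording): a Mutex only serializes the critical sections, and each section still runs on whatever thread the calling coroutine is dispatched on, whereas a single-thread executor dispatcher additionally confines the work to one specific thread.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock
import java.util.concurrent.Executors

val mutex = Mutex()
val dbThread = Executors.newSingleThreadExecutor().asCoroutineDispatcher()

// Mutual exclusion only: sections are serialized, but each runs on the caller's thread.
suspend fun <T> exclusively(block: () -> T): T = mutex.withLock { block() }

// Mutual exclusion AND thread confinement: everything runs on the single pool thread.
suspend fun <T> onDbThread(block: () -> T): T = withContext(dbThread) { block() }
```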
I'm really looking forward to this, @elizarov. Is there any ETA / milestone for that feature?

Sorry, I cannot give any specific ETA at the moment.
Related: https://discuss.kotlinlang.org/t/coroutine-dispatcher-confined-to-a-single-thread/17978

I would need … Is there anything I could use in the meantime?
@alamothe In the forum thread you ask: …

This is not what this proposal is supposed to do. In this proposal, even with a limit of 1 thread, it will still jump between threads, but it will ensure that at most 1 thread is used at any time. We don't have anything out of the box to cover your needs. You'll have to write your own dispatcher that supports it.
This is actually what I need. The requirement is to prevent unintentional multi-threading (because our code is not thread-safe). My original post didn't state this in the best possible way, but later I arrived at this conclusion. Thanks for checking!

Is there any plan to support this for Kotlin/Native?

UPDATE: Due to backward-compatibility requirements the actual design will likely be different. Stay tuned.
Hello, are there any updates on the design, please?
Closing this in favour of #2919. Let's continue our discussion there.
Background

newFixedThreadPoolContext is actively used in coroutines code as a concurrency-limiting mechanism. For example, to limit the number of concurrent requests to the database to 10, one typically defines a dedicated DB context and then wraps all DB invocations into withContext(DB) { ... } blocks (a sketch of such a definition follows below).
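A minimal sketch of such a definition, with the pool size and names assumed for illustration:

```kotlin
import kotlinx.coroutines.*

// Hypothetical example: at most 10 coroutines performing DB work at once.
val DB = newFixedThreadPoolContext(10, "DB")

suspend fun <T> dbQuery(block: () -> T): T =
    withContext(DB) { block() }
```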
This approach has the following problems:

* withContext(DB) invocation performs an actual switch to a different thread, which is extremely expensive.
* newFixedThreadPoolContext references the underlying threads and must be explicitly closed when no longer used. This is quite error-prone, as programmers may use newFixedThreadPoolContext in their code without realizing this fact, thus leaking threads.

Solution
The plan is to reimplement newFixedThreadPoolContext from scratch so that it does not create any threads. Instead, there will be one shared pool of threads that creates new threads strictly when they are needed. Thus, newFixedThreadPoolContext does not create its own threads, but acts only as a semaphore that limits the number of concurrent operations running in this context.

Moreover, DefaultContext, which is currently equal to CommonPool (backed by ForkJoinPool.commonPool), is going to be redefined in this way (a sketch follows below):
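A plausible sketch of the intended redefinition, with nCpus assumed to be the number of available cores and the names illustrative only:

```kotlin
import kotlinx.coroutines.*

// Hypothetical sketch of the intent: the default context is itself just a slice
// of the shared pool, limited to the number of available CPU cores.
val nCpus: Int = Runtime.getRuntime().availableProcessors()
val DefaultContext: CoroutineDispatcher = newFixedThreadPoolContext(nCpus, "DefaultContext")
```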
Now, with this redefinition of DefaultContext, the code that used to define its own DB context continues to work as before (limiting the number of concurrent DB operations). However, both issues identified above are solved:

* withContext(DB) invocation does not actually perform a thread context switch anymore. It only switches the coroutine context and separately keeps track of and limits the number of concurrently running coroutines in the DB context.
* You do not have to worry about closing newFixedThreadPoolContext anymore; as it is not backed by any physical threads, there is no risk of leaking threads.
This change also affects newSingleThreadContext, as its implementation is just a fixed pool of size one (a sketch of the presumed definition follows below).
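A sketch of the presumed shape of that implementation (to the best of my knowledge this mirrors how the function is defined in the library):

```kotlin
import kotlinx.coroutines.*

// Presumed implementation: a single-threaded context is the degenerate case of a
// fixed pool with one thread, so the rewrite described above applies to it as well.
fun newSingleThreadContext(name: String): ExecutorCoroutineDispatcher =
    newFixedThreadPoolContext(1, name)
```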
This issue is related to the discussion on the IO dispatcher in #79. It is inefficient to use Executors.newCachedThreadPool().toCoroutineContext() due to the thread context switches. The plan, as a part of this issue, is to define the following constant (a usage sketch follows below):
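A hedged stand-in for that constant, using Dispatchers.IO (the mechanism that later shipped in the library, documented to share threads with Dispatchers.Default); the val name and the helper below are illustrative only:

```kotlin
import kotlinx.coroutines.*

// Illustrative stand-in for the proposed constant: a dispatcher for blocking IO that
// shares its threads with the default dispatcher, so withContext(IO) { ... } from
// default-dispatched code usually avoids a real thread switch.
val IO: CoroutineDispatcher = Dispatchers.IO

suspend fun readConfig(path: String): String =
    withContext(IO) { java.io.File(path).readText() }
```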
Coroutines working in this context share the same thread pool as DefaultContext, so there is no cost of a thread switch when doing withContext(IO) { ... }, but there is no inherent limit on the number of such concurrently executed operations.

Open questions
* Shall we rename newFixedThreadPoolContext and newSingleThreadContext after this rewrite or leave their names as is? Can we name them better?
* Should we leave newSingleThreadContext defined as before (with all the context-switch cost) to avoid potentially breaking existing code? This would work especially well if newFixedThreadPoolContext is somehow renamed (with the old name deprecated), but newSingleThreadContext retains the old name.

UPDATE: Due to backward-compatibility requirements the actual design will likely be different. Stay tuned.