Add an option to Kestrel to disable threadpool dispatching #1390
@halter73 @davidfowl This is now an InternalKestrelServerOptions; is the plan to make it a KestrelServerOptions?
I don't think there are currently plans to put it on KestrelServerOptions proper. We are using the internal options in some of our benchmarks, and the improvement seen by disabling threadpool dispatching is negligible in netcoreapp2.0.
I did a benchmark à la TechEmpower plaintext using two 8-core Azure machines (D4, I think). It was on the 1.1 runtime; I don't know if that makes a big difference.
From the looks of aspnet/benchmarks, it seems no-dispatching is run with kestrel threads equal to 1/2.
The threadpool has been improved in 2.0. No-dispatching means the application code blocks I/O. The plaintext response stream is just a memory copy of a cached set of 13 bytes, so there is no real application work (just server work). With the threadpool, the total I/O thread count being less than the physical core count makes sense, as you still need CPU for the threadpool to do the application work.

Not dispatching introduces head-of-line blocking (which was one of the "failures" of HTTP/1.1 pipelining), where a slow request on one connection stops the processing of all other connections on the same thread, regardless of CPU being free. That's not to mention when the user application code makes a sync (blocking) read/write, a Task.Wait() or Task.Result, or a sync SQL connect/execute call, and you've knocked out an entire Kestrel thread of I/O processing.

Perhaps dispatching just the application code to the thread pool, while keeping the Kestrel server code on the same thread, might seem like a good compromise between the two? However, picking a site at random... for the new .NET docs API browser site, my browser sends 7 kB of request headers, mostly cookies, which have to be parsed by the server, and that is CPU-bound work. Parsing that 7 kB to create the header dictionary, request object, etc. will block all I/O on that thread, regardless of whether CPU is free, even though all the I/O has already been done for that request.

So on balance, for the general case, it likely works out better to dedicate the Kestrel I/O threads to doing the I/O, then dispatch the processing of the data to spare CPU while the I/O thread goes back to getting the next bit of data. Otherwise you'll have very unbalanced and poor utilization of CPU. Just my 2c...
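To make the blocking hazard concrete, here's a minimal sketch (hypothetical middleware, not code from this thread) of the kind of application code being warned about; if the server ran this inline on a libuv I/O thread instead of dispatching to the threadpool, every other connection on that thread would stall for the duration of the wait:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Hypothetical middleware illustrating the head-of-line blocking hazard above.
// Without threadpool dispatching, this sync-over-async wait ties up the Kestrel
// I/O thread it runs on, stalling every other connection served by that thread.
public class BlockingMiddleware
{
    private readonly RequestDelegate _next;

    public BlockingMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public Task Invoke(HttpContext context)
    {
        // Stand-in for a sync SQL connect/execute, or Task.Wait()/Task.Result in user code.
        Task.Delay(500).Wait();

        return _next(context);
    }
}
```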
I mean 1 or 2.
I think to make it a fair comparison, it should be 6 when kestrelThreadPoolDispatch is false.
6 being the number of cores on an Intel® Xeon® Processor E5-1650.
I think tweaking this on and off at this point is a bit academic. There are enough moving parts and variables as it is to get a solid baseline for this release, including the new transport types. When the dust has settled, I think the whole threading subject could be revisited, including NUMA dispatching, where NICs are located relative to sockets, where the pools are, etc.
I agree it makes sense to defer request handling to the threadpool to avoid the issues caused by bad application code. What are your thoughts on the thread count used for the benchmark?
I don't know the rationale; at a guess, it's examining the overhead associated with dispatching to the threadpool vs not?
@tmds I think you're right, we need to change the benchmark. If we're using libuv threads, make threads = number of cores or number of cores * 2.
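For illustration, a hedged sketch of what pinning the libuv thread count to the core count could look like in a benchmark host, using the 1.x-era KestrelServerOptions.ThreadCount setting (Startup here is just a placeholder for the benchmark app, and whether the right multiplier is 1 or 2 is exactly the open question above):

```csharp
using System;
using Microsoft.AspNetCore.Hosting;

// Sketch only: pin the number of libuv I/O threads to the core count for a benchmark run.
public class Program
{
    public static void Main(string[] args)
    {
        var host = new WebHostBuilder()
            .UseKestrel(options =>
            {
                // number of cores, or * 2 when comparing against the dispatching configuration
                options.ThreadCount = Environment.ProcessorCount;
            })
            .UseStartup<Startup>()
            .Build();

        host.Run();
    }
}
```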
Related topic: Kestrel ThreadCount. Netty uses double the number of CPUs ('logical processors') for its workers. When dispatching every request to the threadpool, it makes sense to have a lower number of I/O threads: there is more work in parsing the request and generating a response than in getting data from/to the kernel.
From http://stackoverflow.com/questions/5474372/how-netty-uses-thread-pools: per endpoint there is a boss thread and a worker thread pool. The worker thread pool defaults to 2 times the number of cores.
The approach matches well with what @benaadams explained before.
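To make the parallel concrete, here's a rough sketch in plain .NET sockets (not Kestrel or Netty code; the class name and port are made up) of the boss-thread / worker-pool split being described: one loop stays dedicated to accepting I/O, while the per-connection work is queued to the shared threadpool, so a slow or blocking request can't stall the loop.

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

// Illustrative only: a single "boss" loop accepts connections, and per-connection
// work is dispatched to the shared threadpool (the "worker pool"), so a slow or
// blocking request cannot stall the accept loop.
class BossWorkerSketch
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 5000);
        listener.Start();

        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();   // boss loop: accept only

            ThreadPool.QueueUserWorkItem(_ =>                // worker pool: application work
            {
                using (client)
                using (var stream = client.GetStream())
                {
                    var response = Encoding.ASCII.GetBytes(
                        "HTTP/1.1 200 OK\r\nContent-Length: 13\r\n\r\nHello, World!");
                    stream.Write(response, 0, response.Length);
                }
            });
        }
    }
}
```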