The counters above are an example taken while the web server wasn't serving any requests. Run Bombardier again against the `api/diagscenario/tasksleepwait` endpoint, this time sustaining the load for 2 minutes so there's plenty of time to observe what happens to the performance counters.
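
For example, a Bombardier invocation like the following generates that sustained load. This is a sketch: the `https://localhost:5001` base address (and the 125-connection count, which is Bombardier's default) are assumptions, so adjust them to match where your sample server is listening:

```console
bombardier -c 125 -d 120s https://localhost:5001/api/diagscenario/tasksleepwait
```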
ThreadPool starvation occurs when there are no free threads to handle the queued work items and the runtime responds by increasing the number of ThreadPool threads. You should observe the `ThreadPool Thread Count` counter (`dotnet.thread_pool.thread.count` in the newer metric naming) rapidly increase to 2-3x the number of processor cores on your machine, after which further threads are added at a rate of 1-2 per second until the count eventually stabilizes somewhere above 125. The slow, steady increase of ThreadPool threads combined with CPU usage well below 100% is the key signal that ThreadPool starvation is currently a performance bottleneck. The thread count will keep increasing until the pool hits the maximum number of threads, enough threads have been created to satisfy all the incoming work items, or the CPU has been saturated. Often, but not always, ThreadPool starvation also shows large values for `ThreadPool Queue Length` (`dotnet.thread_pool.queue.length`) and low values for `ThreadPool Completed Work Item Count` (`dotnet.thread_pool.work_item.count`), meaning that there's a large amount of pending work and little work being completed. Here's an example of the counters while the thread count is still rising:

```dotnetcli
Press p to pause, r to resume, q to quit.
    Status: Running

[System.Runtime]
    % Time in GC since last GC (%)                                     0
    Allocation Rate (B / 1 sec)                                   24,480
    CPU Usage (%)                                                      0
    Exception Count (Count / 1 sec)                                    0
    GC Committed Bytes (MB)                                           56
    GC Fragmentation (%)                                          40.603
    GC Heap Size (MB)                                                 89
    Gen 0 GC Count (Count / 1 sec)                                     0
    Gen 0 Size (B)                                             6,306,160
    Gen 1 GC Count (Count / 1 sec)                                     0
    Gen 1 Size (B)                                             8,061,400
    Gen 2 GC Count (Count / 1 sec)                                     0
    Gen 2 Size (B)                                                   192
    IL Bytes Jitted (B)                                          279,263
    LOH Size (B)                                                  98,576
    Monitor Lock Contention Count (Count / 1 sec)                      0
    Number of Active Timers                                          124
    Number of Assemblies Loaded                                      121
    Number of Methods Jitted                                       3,227
    POH (Pinned Object Heap) Size (B)                          1,197,336
    ThreadPool Completed Work Item Count (Count / 1 sec)               2
```
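
If you want to watch the same signals from inside the process rather than through `dotnet-counters`, the runtime exposes matching `ThreadPool` properties. The snippet below is a minimal sketch; the one-second interval and console logging are arbitrary choices, not part of the sample app:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Periodically log the same ThreadPool signals the counters report:
// live thread count, queued (pending) work items, and completed work items.
_ = Task.Run(async () =>
{
    while (true)
    {
        Console.WriteLine(
            $"Threads: {ThreadPool.ThreadCount}, " +
            $"Queued: {ThreadPool.PendingWorkItemCount}, " +
            $"Completed: {ThreadPool.CompletedWorkItemCount}");
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
});
```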
Once the count of ThreadPool threads stabilizes, the pool is no longer starving. But if it stabilizes at a high value (more than about three times the number of processor cores), that usually indicates the application code is blocking some ThreadPool threads and the ThreadPool is compensating by running with more threads. Running steadily at a high thread count won't necessarily have a large impact on request latency, but if load varies dramatically over time or the app is restarted periodically, then each time the ThreadPool is likely to enter a period of starvation where it slowly adds threads and delivers poor request latency. Each thread also consumes memory, so reducing the total number of threads needed provides another benefit.
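
The typical culprit is sync-over-async code on a request path. The handlers below are a hypothetical sketch contrasting that anti-pattern with its fix; they're illustrative only, not the sample app's actual code:

```csharp
// Hypothetical minimal ASP.NET Core app contrasting a blocking handler with an async one.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Anti-pattern: blocks a ThreadPool thread for the full 500 ms, so under load the
// pool has to keep adding threads just to keep requests moving.
app.MapGet("/blocking", () =>
{
    Task.Delay(500).Wait();   // sync-over-async: the thread is idle but unavailable
    return "done";
});

// Fix: awaiting releases the ThreadPool thread while the delay is pending, so a
// small number of threads can serve many concurrent requests.
app.MapGet("/async", async () =>
{
    await Task.Delay(500);
    return "done";
});

app.Run();
```

With the blocking calls replaced by awaits, the thread count typically stays close to the processor count even under the same load.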