Add a cache for Native offset to IL Offset mapping in the stack walking logic (#117218)
- This cache mitigates an issue where locking around `DebuggerJitInfo` access is extremely inefficient on a multithreaded system, or when MANY different methods are on the stack
- Since we do not yet have extensive experience with this cache, its size is configurable
- This allows us to discover scenarios in production where the cache is too small, and address problems as needed
- The initial size is 1024 entries, which is expected to be a reasonable default, at a cost of 16KB of space on 64-bit systems (see the sizing sketch below)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
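The 16KB figure can be checked directly. A minimal sketch (the struct mirrors the entry layout shown in the diff below; the arithmetic assumes a typical 64-bit ABI with 8-byte pointers):

#include <cstdint>
#include <cstdio>

// Mirrors StackWalkNativeToILCacheEntry from the diff below.
struct StackWalkNativeToILCacheEntry
{
    void*    ip = nullptr;  // 8 bytes on 64-bit
    uint32_t ilOffset = 0;  // 4 bytes, plus 4 bytes of alignment padding
};

int main()
{
    const size_t defaultEntries = 1024; // the default size from the commit message
    printf("sizeof(entry) = %zu bytes\n", sizeof(StackWalkNativeToILCacheEntry)); // 16 on 64-bit
    printf("cache = %zu KB\n", defaultEntries * sizeof(StackWalkNativeToILCacheEntry) / 1024); // 16 KB
    return 0;
}

Nominally each entry is a pointer plus 4 bytes (12 bytes), but natural alignment pads the struct to 16 bytes on 64-bit, which is where the 16KB figure comes from.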
// This is an implementation of a cache of the Native->IL offset mappings used by managed stack traces. It exists for the following reasons:
// 1. When a large server experiences a large number of exceptions due to some other system failing, it can cause a tremendous number of stack traces to be generated if customers are attempting to log.
// 2. The native->IL offset mapping is somewhat expensive to compute, and it is not necessary to compute it repeatedly for the same IP.
// 3. Often when these mappings are needed, the system is under stress and throwing on MANY different threads with similar callstacks, so the cost of locking around the cache may be significant.
//
// The cache is implemented as a simple hash table, where the key is the IP + fAdjustOffset
// flag, and the value is the IL offset. We use a version number to indicate when the cache
// is being updated, and to indicate that a found value is valid, and we use a simple linear
// probing algorithm to find the entry in the cache.
//
// The replacement policy is randomized, and there are s_stackWalkCacheWalk (8) possible buckets to check before giving up.
//
// Since the cache entries are larger than a single pointer, we use a simple version locking scheme to protect readers.

struct StackWalkNativeToILCacheEntry
{
    void* ip = NULL; // The IP of the native code
    uint32_t ilOffset = 0; // The IL offset, with the adjust offset flag set if the native offset was adjusted by STACKWALK_CONTROLPC_ADJUST_OFFSET
};

static LONG s_stackWalkNativeToILCacheVersion = 0;
static DWORD s_stackWalkCacheSize = 0; // The total number of entries in the cache (each entry is a pointer + 4 bytes, padded to 16 bytes on 64-bit, so the default 1024 entries cost 16KB)
const DWORD s_stackWalkCacheWalk = 8; // Walk up to this many entries in the cache before giving up
const DWORD s_stackWalkCacheAdjustOffsetFlag = 0x80000000; // 2^31, stored in the IL offset portion of the cache entry to record whether the native offset was adjusted by STACKWALK_CONTROLPC_ADJUST_OFFSET
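To make the flag packing concrete: real IL offsets stay below 2^31, so the top bit of the 32-bit value is free to record whether STACKWALK_CONTROLPC_ADJUST_OFFSET was applied. A minimal sketch (the constant is from the diff above; the helper names are hypothetical, not code from the PR):

#include <cstdint>

const uint32_t kAdjustOffsetFlag = 0x80000000; // same value as s_stackWalkCacheAdjustOffsetFlag

// Hypothetical encode helper: fold the adjust flag into bit 31 of the IL offset.
uint32_t PackILOffset(uint32_t ilOffset, bool fAdjustOffset)
{
    // Assumes ilOffset < 2^31, so the top bit is unused by real offsets.
    return fAdjustOffset ? (ilOffset | kAdjustOffsetFlag) : ilOffset;
}

// Hypothetical decode helpers: recover the flag, then the plain offset.
bool UnpackAdjustFlag(uint32_t packed)
{
    return (packed & kAdjustOffsetFlag) != 0;
}

uint32_t UnpackILOffset(uint32_t packed)
{
    return packed & ~kAdjustOffsetFlag;
}

This is why the cache-hit path below both compares the flag bit against fAdjustOffset and masks it off before returning the offset.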
Later in the diff, inside the probe loop, the cache-hit path:

if (VolatileLoadWithoutBarrier(&cacheTable[index].ip) == ip)
{
    // Cache hit
    uint32_t dwILOffset = VolatileLoad(&cacheTable[index].ilOffset); // It is IMPORTANT that this load have a barrier after it, so that the version check in the containing function is safe.
    if (fAdjustOffset != ((dwILOffset & s_stackWalkCacheAdjustOffsetFlag) == s_stackWalkCacheAdjustOffsetFlag))
    {
        continue; // The cache entry did not match on the adjust offset flag, so move to the next entry.
    }

    dwILOffset &= ~s_stackWalkCacheAdjustOffsetFlag; // Clear the adjust offset flag
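The containing function that the barrier comment refers to is not part of this excerpt. As a hedged sketch (the overall function shape and HashIP are assumptions, not code from the PR; the globals and VolatileLoad helpers are reused from the diff above), the reader-side version protocol around the probe loop plausibly looks like this:

// Sketch of a seqlock-style reader: snapshot the version, probe, then
// re-check the version so that a concurrently updated entry is rejected
// rather than returned torn.
static bool TryGetCachedILOffset(StackWalkNativeToILCacheEntry* cacheTable, void* ip, bool fAdjustOffset, uint32_t* pILOffset)
{
    LONG version = VolatileLoad(&s_stackWalkNativeToILCacheVersion);

    DWORD index = HashIP(ip) % s_stackWalkCacheSize; // HashIP is hypothetical
    for (DWORD probe = 0; probe < s_stackWalkCacheWalk; probe++)
    {
        DWORD slot = (index + probe) % s_stackWalkCacheSize; // simple linear probing, per the header comment
        if (VolatileLoadWithoutBarrier(&cacheTable[slot].ip) == ip)
        {
            uint32_t dwILOffset = VolatileLoad(&cacheTable[slot].ilOffset); // the barrier orders this load before the version re-check
            if (fAdjustOffset != ((dwILOffset & s_stackWalkCacheAdjustOffsetFlag) != 0))
            {
                continue; // same IP, wrong adjust flag; try the next slot
            }
            // Only trust the entry if no writer ran while we were reading it.
            if (VolatileLoad(&s_stackWalkNativeToILCacheVersion) != version)
            {
                return false; // concurrent update; caller falls back to the slow path
            }
            *pILOffset = dwILOffset & ~s_stackWalkCacheAdjustOffsetFlag;
            return true;
        }
    }
    return false; // miss after s_stackWalkCacheWalk probes
}

On the writer side, the randomized replacement mentioned in the header comment would pick one of the s_stackWalkCacheWalk candidate slots at random as the victim, bumping the version around the store so that concurrent readers reject anything they read mid-update.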