Add cache limits for resources and attributes #509

Open
wants to merge 8 commits into base: main

Conversation

srikanthccv
Member

The main objective of this change is to prevent rogue datasets from adversely affecting others. The shortcoming of a single set of resource + attribute fingerprints is that all resources are treated equally, when in practice some resources are more cardinal than others. So instead of maintaining a single set for the combination of resource fingerprint + attrs fingerprint, we now maintain one key for each resource fingerprint, which holds that resource's set of attribute fingerprints. To prevent key explosion, a separate set is maintained per data source to track the number of unique resource fingerprints, configured with max_resources. (Some users add the timestamp of the log record as a resource attribute; we don't want to accept such data as part of this.)

The (configurable) limits are that there can only be a maximum of 8192 resources per data source for the current window, and each resource can have a maximum of 2048 unique attribute fingerprints. Since any data can go into attributes, we want to limit the attribute fingerprints as well. There are several filtering layers intended to filter out high-cardinality values before they reach fingerprint creation:

  1. We filter out attributes that have more than X distinct values (see the sketch after this list).
  2. We run a goroutine that fetches the distinct count of values for each attribute (the DB has the complete data) and pre-filters them.
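
To make the first layer concrete, here is a minimal Go sketch of dropping attributes whose observed distinct-value count exceeds a threshold; the names (dropHighCardinalityAttrs, maxDistinctValues, distinctCounts) are hypothetical and not taken from this PR:

package metadata

import "fmt"

// maxDistinctValues is a hypothetical stand-in for the "X distinct values" threshold.
const maxDistinctValues = 500

// dropHighCardinalityAttrs removes attributes whose observed distinct-value count
// (periodically refreshed from the DB by a background goroutine) exceeds the threshold.
// All names here are illustrative, not taken from the PR.
func dropHighCardinalityAttrs(attrs map[string]string, distinctCounts map[string]int64) map[string]string {
	filtered := make(map[string]string, len(attrs))
	for k, v := range attrs {
		if distinctCounts[k] > maxDistinctValues {
			continue // too many distinct values seen for this attribute; never fingerprint it
		}
		filtered[k] = v
	}
	return filtered
}

func Example() {
	attrs := map[string]string{"service.name": "api", "request.id": "abc-123"}
	counts := map[string]int64{"service.name": 12, "request.id": 1000000}
	fmt.Println(dropHighCardinalityAttrs(attrs, counts)) // request.id is dropped
}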

This greatly reduces the number of unique fingerprints. However, even if the distinct values are not high, some attributes have 10-20 values each, and their combinations can still result in a large number of unique attribute fingerprints, so we want to limit those as well; this is max_cardinality_per_resource. Even when each resource stays below that limit, the number of resources times the number of attribute fingerprints can be ~17 million, which we don't want to allow, so there is also a total maximum cardinality allowed for each data source in the current window, configured with max_total_cardinality. All of these settings have defaults based on our observations from monitoring our own system; we may tweak some of them as we learn more.
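
For reference, a minimal sketch of how these limits could be modelled as a config struct; the key names and the 8192/2048 defaults come from the description above, while the struct itself, the mapstructure tags, and anything else are assumptions:

package metadata

// CacheLimits is an illustrative container for the cache-limit settings described above.
type CacheLimits struct {
	// Maximum unique resource fingerprints per data source per window (8192 by default).
	MaxResources int `mapstructure:"max_resources"`
	// Maximum unique attribute fingerprints per resource (2048 by default).
	MaxCardinalityPerResource int `mapstructure:"max_cardinality_per_resource"`
	// Overall cap on resource x attribute fingerprint combinations per data source
	// per window (default not stated in this description).
	MaxTotalCardinality int `mapstructure:"max_total_cardinality"`
}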

srikanthccv marked this pull request as ready for review January 16, 2025 12:16
@grandwizard28
Collaborator

grandwizard28 commented Jan 16, 2025

Here's what's happening:

We are using one set of the form %s:metadata:%s:%d:resources per signal, per tenant to store unique fingerprints of resources. If this exceeds max_resources (8192), we don't proceed.

We are using sets of the form %s:metadata:%s:%d:resources:<fingerprint> per resource, per signal, per tenant, of which there can be a total of max_resources (8192). The max number of values that one such set can have is max_cardinality_per_resource (2048).

So, for one tenant and one signal, the max number of sets will be 8192 + 1.
For one tenant across 3 signals, that is 8193 * 3 = 24,579 sets.

For each payload we are doing the following:

SCARD(%s:metadata:%s:%d:resources)

WHILE cursor DO
  cursor := SCAN(%s:metadata:%s:%d:resources) --> expensive
  SCARD(cursor)
DONE

FOR resource, attributes DO
   attributes_diff = SMISMEMBER(%s:metadata:%s:%d:resources:<fingerprint>, attributes)
DONE

FOR resource, attribute_diff DO
     SCARD(%s:metadata:%s:%d:resources)
     SADD(resource)
     SCARD(%s:metadata:%s:%d:resources:<fingerprint>)
     SADD(attribute_diff)
DONE
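
For concreteness, here is a rough go-redis (v9) sketch of that per-payload call pattern; the key construction, limits, names, and error handling are simplified assumptions, not the PR's actual code:

package metadata

import (
	"context"
	"errors"

	"github.com/redis/go-redis/v9"
)

// Hypothetical limit for illustration; the real value comes from config (max_resources).
const maxResources = 8192

var errTooManyResources = errors.New("max_resources exceeded")

// checkAndUpdate sketches the per-payload call pattern above. resourcesKey stands in
// for the "%s:metadata:%s:%d:resources" key, and payload maps a resource fingerprint
// to its attribute fingerprints.
func checkAndUpdate(ctx context.Context, rdb *redis.Client, resourcesKey string, payload map[string][]string) error {
	// SCARD on the per-signal resources set (max_resources check).
	n, err := rdb.SCard(ctx, resourcesKey).Result()
	if err != nil {
		return err
	}
	if n >= maxResources {
		return errTooManyResources
	}

	// The expensive part: SCAN for the per-resource sets and SCARD each of them.
	var cursor uint64
	for {
		keys, next, err := rdb.Scan(ctx, cursor, resourcesKey+":*", 100).Result()
		if err != nil {
			return err
		}
		for _, k := range keys {
			_ = rdb.SCard(ctx, k).Val() // per-resource cardinality check (max_cardinality_per_resource)
		}
		if next == 0 {
			break
		}
		cursor = next
	}

	// Per resource: SMISMEMBER to find which attribute fingerprints are new, then SADD.
	for resource, attrs := range payload {
		if len(attrs) == 0 {
			continue
		}
		members := make([]interface{}, len(attrs))
		for i, a := range attrs {
			members[i] = a
		}
		seen, err := rdb.SMIsMember(ctx, resourcesKey+":"+resource, members...).Result()
		if err != nil {
			return err
		}
		if err := rdb.SAdd(ctx, resourcesKey, resource).Err(); err != nil {
			return err
		}
		for i, isMember := range seen {
			if !isMember {
				if err := rdb.SAdd(ctx, resourcesKey+":"+resource, attrs[i]).Err(); err != nil {
					return err
				}
			}
		}
	}
	return nil
}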

I propose the following structure:

  1. Use one Redis hash called %s:metadata:%s:%d:resources to store <resource-fingerprint>:<num-attributes>.
  2. Use one set called %s:metadata:%s:%d:resources:attributes per signal, with <resource-fingerprint>:<attribute-fingerprint> as the values.

So for each payload, we have

FOR resource, attributes DO
   fingerprints = <resource-fingerprint>:<attribute-fingerprint>
DONE

HLEN(%s:metadata:%s:%d:resources) --> max_resources_check

HMGET(%s:metadata:%s:%d:resources) --> per_attribute_check

diff = SADD(%s:metadata:%s:%d:resources:attributes, fingerprints)

[Result of SADD and HMGET will give the updated values]

HMSET(%s:metadata:%s:%d:resources,diff)
Benefits:

  1. Fewer calls to Redis: 4 per batch.
  2. Fewer keys created: 2 per tenant per signal.
  3. Redis hashes are fast and memory-efficient.

Bonus points if this can be a Lua script.
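
For illustration, a rough go-redis (v9) sketch of the proposed hash + set flow; the key names, limits, and the use of HINCRBY (instead of HMSET) for the count update are assumptions, and the whole body is a candidate for the Lua script mentioned above:

package metadata

import (
	"context"
	"fmt"
	"strconv"

	"github.com/redis/go-redis/v9"
)

// Hypothetical limits for illustration; the real values come from config.
const (
	maxResourcesLimit             = 8192
	maxCardinalityPerResourceHint = 2048
)

// proposedFlow sketches the suggested structure: one hash of
// <resource-fingerprint> -> <num-attributes> and one set of
// <resource-fingerprint>:<attribute-fingerprint> values, per signal per tenant.
// hashKey and setKey stand in for "%s:metadata:%s:%d:resources" and
// "%s:metadata:%s:%d:resources:attributes"; payload maps a resource fingerprint
// to its attribute fingerprints.
func proposedFlow(ctx context.Context, rdb *redis.Client, hashKey, setKey string, payload map[string][]string) error {
	if len(payload) == 0 {
		return nil
	}

	// HLEN --> max_resources check.
	numResources, err := rdb.HLen(ctx, hashKey).Result()
	if err != nil {
		return err
	}
	if numResources >= maxResourcesLimit {
		return fmt.Errorf("max_resources exceeded: %d", numResources)
	}

	// HMGET --> per-resource attribute counts, compared against
	// max_cardinality_per_resource before admitting new values.
	fields := make([]string, 0, len(payload))
	for r := range payload {
		fields = append(fields, r)
	}
	counts, err := rdb.HMGet(ctx, hashKey, fields...).Result()
	if err != nil {
		return err
	}
	for i, r := range fields {
		if s, ok := counts[i].(string); ok {
			if c, _ := strconv.ParseInt(s, 10, 64); c >= maxCardinalityPerResourceHint {
				delete(payload, r) // resource already at its attribute cap; skip it
			}
		}
	}

	// SADD the combined <resource-fingerprint>:<attribute-fingerprint> values; the
	// return value is the number of fingerprints that were actually new, which is
	// then written back to the hash.
	for resource, attrs := range payload {
		if len(attrs) == 0 {
			continue
		}
		members := make([]interface{}, 0, len(attrs))
		for _, a := range attrs {
			members = append(members, resource+":"+a)
		}
		added, err := rdb.SAdd(ctx, setKey, members...).Result()
		if err != nil {
			return err
		}
		if added > 0 {
			if err := rdb.HIncrBy(ctx, hashKey, resource, added).Err(); err != nil {
				return err
			}
		}
	}
	return nil
}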
