
Redis data store: Why are per-process keys required for MIN/MAX aggregations? #1

@olofhe

Hi!

I might be missing something, but I don't understand the need to keep individual Redis hash keys per process in order to support MIN and MAX aggregation of metrics.

It seems entirely possible to keep just one key in Redis holding the maximum or minimum reported value.

For MAX, simply write the new value v_new to key k iff:

  • k has no old value v_old

OR

  • v_old < v_new

and the same for MIN, but with a flipped inequality sign.

AFAIK, there is no built-in Redis command that does this atomically in a single step (i.e. without the risk of some other process modifying the value between the read and the write), but atomicity can easily be ensured with EVAL and a Lua script, since Redis executes scripts atomically.

Here's an example Ruby function that does this (extra verbose for readability).

def redis_set_if_greater(redis, key, val)
  # Redis runs Lua scripts atomically, so no other client can
  # modify the key between the GET and the SET below.
  lua_script =
    "local key = KEYS[1]; " \
    "local val = ARGV[1]; " \
    "local current_val = redis.call('get', key); " \
    "if (not current_val) or (tonumber(current_val) < tonumber(val)) then " \
      "redis.call('set', key, val); " \
    "end"
  redis.eval(lua_script, [key], [val])
end
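
And the MIN counterpart is identical apart from the flipped comparison (an untested sketch included for symmetry; redis_set_if_less is just a name I made up):

# Same as redis_set_if_greater, but only overwrites when the
# stored value is larger than the new one.
def redis_set_if_less(redis, key, val)
  lua_script =
    "local key = KEYS[1]; " \
    "local val = ARGV[1]; " \
    "local current_val = redis.call('get', key); " \
    "if (not current_val) or (tonumber(current_val) > tonumber(val)) then " \
      "redis.call('set', key, val); " \
    "end"
  redis.eval(lua_script, [key], [val])
end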

Example program:

require "redis"

r = Redis.new
r.del "mykey"
[2, 1, 3, 2].each do |x|
  redis_set_if_greater(r, "mykey", x)
  puts "set #{x} => #{r.get("mykey")}"
end

Output:

set 2 => 2
set 1 => 2
set 3 => 3
set 2 => 3

Is there something I haven't considered? :)

Best,
Olof
