
Redis cluster client #45

Closed
wants to merge 3 commits into from

Conversation

h4lflife

Redis cluster client.

This is implemented as a wrapper over the existing resty-redis client, adding cluster functionality. All basic functionality except pipelining is working; pipelining support is in progress.

  • Each cluster is identified by a "cluster id".
  • The cluster representation identified by a "cluster id" is shared across all requests within a worker process.
  • Multiple Redis clusters are supported.
  • Connection pooling, as present in the existing resty-redis client, is available.
  • Performance is nearly the same as the original resty-redis under normal conditions.

ref: https://github.com/antirez/redis-rb-cluster

Example:

local redis_cluster = require("redis_cluster")

local cluster_id = "test_cluster"

-- Subset of nodes within the cluster
local startup_nodes = { 
    {"127.0.0.1", 7004}, 
    {"127.0.0.1", 7000}, 
    {"127.0.0.1", 7001}
}

local opt = { 
    timeout = 100,
    keepalive_size = 100,
    keepalive_duration = 60000
}

local rc = redis_cluster:new(cluster_id, startup_nodes, opt)

rc:initialize()

local ok, err = rc:set("key1", "val1")
if not ok then
    ngx.say("Unable to set key1: ", err)
else
    ngx.say("key1 set result: ", ok) 
end

local res, err = rc:get("key1")
if not res then
    ngx.say("Failed to get key1: ", err)
else
    ngx.say("key1:", res)
end

-- (same as above, slightly faster)
res, err = rc:send_cluster_command("get", "key1")
if not res then
    ngx.say("Failed to get key1 with send_cluster_command: ", err)
else
    ngx.say("key1 using send_cluster_command:", res)
end

Open Issues:

  • At initialization time (once per worker) and during cluster reconfiguration, all requests within the data-refresh window (which should be quite small, a few ms) will try to update the cluster representation. This doesn't seem to affect functionality, but the race condition needs to be removed, as it would cause a small latency spike during cluster reconfiguration. Would like opinions on this one.
  • Pipeline "asking" request during cluster reconfiguration.
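The first open issue above (many requests racing to refresh the cluster representation at the same time) is commonly handled by serializing the refresh behind a lock. A minimal sketch, assuming the lua-resty-lock library and a configured shared dict named "locks"; the callbacks `is_fresh` and `refresh_slots` are illustrative names, not code from this PR:

```lua
-- Sketch only: let a single request per refresh window rebuild the
-- cluster representation; concurrent requests wait and then reuse it.
-- Assumes: lua_shared_dict locks 1m; the lua-resty-lock library.
local resty_lock = require "resty.lock"

-- `is_fresh` and `refresh_slots` are hypothetical callbacks supplied
-- by the cluster client; they are not names from this pull request.
local function refresh_slots_once(cluster_id, is_fresh, refresh_slots)
    local lock, err = resty_lock:new("locks")
    if not lock then
        return nil, "failed to create lock: " .. (err or "?")
    end

    local elapsed, lerr = lock:lock("slots:" .. cluster_id)
    if not elapsed then
        return nil, "failed to acquire lock: " .. (lerr or "?")
    end

    -- Re-check under the lock: another request may have finished the
    -- refresh while we were waiting, so the work can be skipped.
    local ok, rerr = true, nil
    if not is_fresh(cluster_id) then
        ok, rerr = refresh_slots(cluster_id)
    end

    lock:unlock()
    return ok, rerr
end
```

With this pattern, during a reconfiguration only one request per worker pays the refresh cost; the others block briefly on the lock and then reuse the freshly built slot map.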

Please review and let me know of any comments.

@agentzh
Member

agentzh commented May 22, 2014

@h4lflife Thank you for the contribution! I really appreciate it :) I'll look into your patch when I have some spare time. Been busy with $work lately, sorry :) Thanks again!

@misiek08

misiek08 commented Apr 4, 2015

It's an old pull request, but I have a question, because I'm not sure I understand your code correctly.

You have hard-coded limits of at most 20 clusters and 500 nodes per cluster, right? Or does Lua allocate space for 20 and 500 entries and grow the allocation if we exceed that?

If there really are limits, I think one instance of your code should serve one cluster, and the nodes should be stored in a dynamically sized array.

@zhduan

zhduan commented Jun 4, 2015

@agentzh @h4lflife Is there any current activity on this pull request? We need cluster support too and would love to see it move forward.

@agentzh
Member

agentzh commented Jun 13, 2015

@zhduan My hunch is that it's better implemented in a separate wrapper library, in the same spirit as @pintsized's lua-resty-redis-connector.

["shutdown"] = true
}

local band, bor, bxor = bit.band, bit.bor, bit.bxor


Hi, I'm new to GitHub. When I use your redis_cluster, I hit a problem at this line: attempt to index local 'bit' (a boolean value). I'm wondering how to solve this problem. Can you help me? Thanks a lot!


I have solved my problem; it was my own mistake. Now I'm using redis_cluster with Redis 3.0.1 and it works pretty well.

@wyTrivail

Thanks for implementing it, but I have a question: is there a connection pool in your code?


local cluster_slots = cluster.slots

for slot_index = 9, #fields do


Here, if the slots on one node are 0-1665, 5461-7127, 10923-12588, some slots will be lost when the slot cache is populated.
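One way to avoid losing the non-contiguous ranges is to walk every slot field a node advertises, not just the first one. A hedged sketch in plain Lua; `add_node_slots`, `cluster_slots`, and the field layout (slot ranges starting at field 9, as in the diff context above) are assumptions based on this review comment, not code from the PR:

```lua
-- Sketch: record every slot range a node owns, including
-- non-contiguous sets such as "0-1665 5461-7127 10923-12588".
local function add_node_slots(cluster_slots, node, fields)
    for slot_index = 9, #fields do
        local range = fields[slot_index]
        -- a field is either "start-stop" or a single slot "n"
        local first, last = range:match("^(%d+)%-(%d+)$")
        first = tonumber(first) or tonumber(range)
        last = tonumber(last) or first
        if first then
            for slot = first, last do
                cluster_slots[slot] = node
            end
        end
    end
end
```

Iterating over all fields, rather than stopping at the first range, ensures every slot the node owns ends up in the cache.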

@byliu mentioned this pull request Nov 24, 2016
@jzh800

jzh800 commented Mar 31, 2017

Does it support password access,
like red:auth("foobared")?
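For what it's worth, password access could be layered on at the point where the wrapper opens a node connection, before the connection goes back into the keepalive pool. A sketch assuming a hypothetical `opt.password` option (this PR does not expose one); the AUTH-only-on-fresh-connections check uses lua-resty-redis's `get_reused_times()`, which returns 0 for a brand-new connection:

```lua
local redis = require "resty.redis"

-- Sketch: connect to one cluster node and authenticate if a password
-- was configured. `opt.password` is a hypothetical option.
local function connect_node(host, port, opt)
    local red = redis:new()
    red:set_timeout(opt.timeout)

    local ok, err = red:connect(host, port)
    if not ok then
        return nil, err
    end

    if opt.password then
        -- connections reused from the keepalive pool are already
        -- authenticated, so only send AUTH on fresh connections
        local reused = red:get_reused_times()
        if reused == 0 then
            local res, aerr = red:auth(opt.password)
            if not res then
                return nil, "failed to authenticate: " .. (aerr or "?")
            end
        end
    end

    return red
end
```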

@wjs57y

wjs57y commented Jul 20, 2017

@agentzh How do I configure the authentication password? (comment originally in Chinese)

@agentzh
Member

agentzh commented Jul 20, 2017

@wjs57y Please, no Chinese here. This place is considered English only. It is especially rude to reply in Chinese to an unrelated pure-English issue thread. If you really want to use Chinese, please join the openresty (Chinese) mailing list instead. Please see https://openresty.org/en/community.html Thanks for your cooperation.

@harryin777

harryin777 commented Sep 27, 2021

Here is my error message: Uninitialized cluster. I followed the example; what's wrong? Has anyone else hit the same problem?
