
Parallelize requests #126

Closed
zemek opened this issue Dec 7, 2017 · 5 comments · Fixed by #256

Comments


zemek commented Dec 7, 2017

It looks like the exporter queries each redis node sequentially, would it be possible to parallelize these requests?
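A minimal sketch of what fanning the per-node scrapes out to goroutines could look like; `scrapeAll` and `scrapeNode` are hypothetical names standing in for the exporter's existing per-node logic, not its actual code:

```go
package exporter

import "sync"

// scrapeAll queries every configured Redis address concurrently instead of
// looping over them one by one, and returns once all scrapes have finished.
func scrapeAll(addrs []string, scrapeNode func(addr string)) {
	var wg sync.WaitGroup
	for _, addr := range addrs {
		wg.Add(1)
		go func(a string) {
			defer wg.Done()
			scrapeNode(a) // collect INFO, keyspace stats, etc. for one node
		}(addr)
	}
	wg.Wait()
}
```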

oliver006 (Owner) commented Dec 9, 2017

Yeah, it'd be possible to parallelize them, but I'm wondering: how many redis instances are you monitoring with one exporter?
Generally, best practice is to run one exporter per redis instance (e.g. as a sidecar in a K8s pod), so parallelization shouldn't really be needed.
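For reference, the sidecar layout mentioned above looks roughly like the sketch below; names and image tags are placeholders, and the `-redis.addr` flag should be checked against the README for the version in use:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
    - name: redis
      image: redis:4.0
      ports:
        - containerPort: 6379
    - name: redis-exporter
      image: oliver006/redis_exporter:latest
      args: ["-redis.addr=redis://localhost:6379"]
      ports:
        - containerPort: 9121   # Prometheus scrapes this port on each pod
```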

@cosmopetrich

Monitoring multiple redises from a single instance of the redis_exporter can be handy for cases where it's not easy to run a separate instance of the exporter for each redis instance. Redises running on AWS ElastiCache or Azure Redis Cache come to mind as an example of this.

I suppose that another solution to the same problem might be to allow the exporter's target to be set at scrape-time by Prometheus itself in a manner similar to the Blackbox exporter. That would let Prometheus handle the parallelization and allow service discovery to control the list of external redises. That's probably a much more drastic change though.
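For the record, the Blackbox-exporter-style pattern on the Prometheus side would look roughly like this sketch. It assumes the exporter exposed a `/scrape` endpoint that accepts a `?target=` parameter, which it does not do today; the target names are placeholders:

```yaml
scrape_configs:
  - job_name: redis_exporter_targets
    metrics_path: /scrape
    static_configs:
      - targets:
          - redis-1:6379
          - redis-2:6379
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target      # becomes the ?target= query parameter
      - source_labels: [__param_target]
        target_label: instance            # keep the redis address as the instance label
      - target_label: __address__
        replacement: redis-exporter:9121  # the single exporter actually scraped
```

Prometheus then runs one scrape per target and parallelizes them itself, which also covers the original request.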


lukmdo commented Aug 13, 2018

As for a custom scrape param, see prometheus/client_golang#267.

Allowing parallel requests seems reasonable in general, but it would require a refactor that removes most of the global state. I'm not sure if that state is there by design or for a specific use case; Prometheus can do the accounting/aggregation itself, so perhaps it's OK to replace it with stateless constMetrics, @oliver006? (e.metrics, e.duration, e.scrapeErrors, ...)

@oliver006 (Owner)

Yeah, doing something similar to what you linked to would be one way of doing it. Then Prometheus would scrape the exporter several times and pass a different Redis instance address for each scrape. And you're right, a good bit of the global state would need to get removed.

However, I agree with @cosmopetrich - that would be a rather big change and I'm not sure if there's an urgent need for this.
How many redis instances are people trying to scrape with just one exporter?

@CantankerousBullMoose

> How many redis instances are people trying to scrape with just one exporter?

Well, probably more than one. In some environments (like Docker) it's tough to pair each exporter with a redis instance the way Kubernetes supports. So if I have several redis instances running behind a dnsrr service discovery name, I can't make the exporter resolve to the dynamically generated IP and hostname of each individual container backing that service.

And one exporter for every instance is probably overkill, right? Ideally, you would probably have one exporter for every, I don't know, five or ten redis instances. If you implemented this change, I could run multiple redis instances as a Docker service. Prometheus can resolve the service endpoint to a list of specific container IPs, and any instance of the exporter could process metrics for any redis instance. So if the exporter also runs as a Docker service, you could scale it up in proportion to the number of redis instances without it being one to one, and use Docker's built-in load balancing to distribute calls from Prometheus to the exporters.
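A sketch of how Prometheus could discover those containers via the dnsrr name and hand each one to whichever exporter replica answers the scrape, assuming the scrape-time `?target=` support discussed earlier; `tasks.redis` and `redis-exporter` are placeholder Docker service names:

```yaml
scrape_configs:
  - job_name: redis
    metrics_path: /scrape
    dns_sd_configs:
      - names: ["tasks.redis"]   # dnsrr name resolving to every redis container IP
        type: A
        port: 6379
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: redis-exporter:9121  # Docker's load balancing picks an exporter replica
```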
