Use scrape manager #6
Conversation
Codecov Report — Attention:

Additional details and impacted files:

@@            Coverage Diff            @@
##           master       #6     +/-  ##
========================================
- Coverage   13.90%    9.60%   -4.31%
========================================
  Files           5        4       -1
  Lines         266      281      +15
========================================
- Hits           37       27      -10
- Misses        226      252      +26
+ Partials        3        2       -1

View full report in Codecov by Sentry.
MetricsPath:    "metrics",
ScrapeInterval: model.Duration(time.Duration(*p.configuration.ScrapeIntervalSeconds) * time.Second),
ScrapeTimeout:  model.Duration(time.Duration(*p.configuration.ScrapeTimeoutSeconds) * time.Second),
BodySizeLimit:  units.Base2Bytes(*p.configuration.BodySizeLimitBytes),
This `Base2Bytes` conversion isn't really done anywhere else. I would lean towards removing it and accepting whatever value the customer sets. If they want, they can set 1024, or 1000. Is there any strict requirement that the number needs to be in base 2?
It's already getting converted to `int64` again in the scraper, so it's just to satisfy the type.
Ah nvm. I missed that `BodySizeLimit` is in `units.Base2Bytes`.
server/configuration.go (Outdated)
}
targets := []*targetgroup.Group{
	{Labels: promModel.LabelSet{
		promModel.AddressLabel: promModel.LabelValue(host + ":" + port),
server/configuration.go (Outdated)
targets := make([]*targetgroup.Group, len(nodes))
for i, node := range nodes {
	targets[i].Labels = promModel.LabelSet{
		promModel.AddressLabel: promModel.LabelValue(node.Hostname + ":" + port),
Same as the other comment
server/configuration.go (Outdated)
}

sync := map[string][]*targetgroup.Group{
	"prometheus": make([]*targetgroup.Group, 1),
We are overwriting this key anyway. Any reason to pre-allocate a slice that gets overwritten anyway?
Ah no, that's a leftover I forgot to remove.
// once we start supporting HA, we will need to listen to the cluster change channel and
// convert the []mmodel.ClusterDiscovery entries into map[string][]*targetgroup.Group
p.waitGroup.Add(1)
go func() {
Just for clarity: you don't really need a separate goroutine at this point, but we are creating it for when we will need HA?
Exactly, there will be a `select` listening on different channels for HA. That's why I added the comment above.
Nicely done 👌
Summary
This PR replaces #4 and #5; essentially, we use the Prometheus scrape manager to delegate the scraping work. For simplicity, it's the better choice for now. We can stop/start it as needed if we have to do operations on the TSDB.