
Use scrape manager #6

Merged: 13 commits into master from scrape-manager on Dec 21, 2023
Conversation

isacikgoz (Member)

Summary

This PR replaces #4 and #5; we now use the Prometheus scrape manager to delegate the scraping work. For simplicity, it's the better choice for now, and we can stop/start it as needed if we have to perform operations on the TSDB.
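The stop/start lifecycle the summary describes can be sketched with a plain-Go stand-in. Note this is a minimal sketch: `scrapeLoop` below is a hypothetical type standing in for the real Prometheus scrape manager, which exposes the analogous `Run`/`Stop` pair.

```go
package main

import (
	"fmt"
	"sync"
)

// scrapeLoop is a hypothetical stand-in for the Prometheus scrape manager:
// Run starts a background goroutine that works until Stop is called,
// mirroring the start/stop lifecycle the summary describes.
type scrapeLoop struct {
	stop chan struct{}
	wg   sync.WaitGroup
}

func newScrapeLoop() *scrapeLoop { return &scrapeLoop{stop: make(chan struct{})} }

func (s *scrapeLoop) Run() {
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		<-s.stop // "scrape" until told to stop
	}()
}

// Stop shuts the loop down and waits for it to exit, so TSDB maintenance
// can proceed without concurrent scrapes.
func (s *scrapeLoop) Stop() {
	close(s.stop)
	s.wg.Wait()
}

func main() {
	loop := newScrapeLoop()
	loop.Run()
	loop.Stop() // safe to operate on the tsdb here
	fmt.Println("stopped cleanly")

	loop = newScrapeLoop() // restart with a fresh loop afterwards
	loop.Run()
	loop.Stop()
	fmt.Println("restarted and stopped")
}
```

The key property is that `Stop` blocks until the goroutine has actually exited, which is what makes the "stop, touch the TSDB, restart" sequence safe.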

@isacikgoz isacikgoz added the 2: Dev Review Requires review by a core committer label Dec 20, 2023
codecov-commenter commented Dec 20, 2023

Codecov Report

Attention: 90 lines in your changes are missing coverage. Please review.

Comparing base (9b4153b) at 13.90% coverage to head (ed33d3e) at 9.60%.

Files                     Patch %   Lines
server/plugin.go          0.00%     54 Missing ⚠️
server/configuration.go   0.00%     36 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff            @@
##           master      #6      +/-   ##
=========================================
- Coverage   13.90%   9.60%   -4.31%     
=========================================
  Files           5       4       -1     
  Lines         266     281      +15     
=========================================
- Hits           37      27      -10     
- Misses        226     252      +26     
+ Partials        3       2       -1     

☔ View full report in Codecov by Sentry.

MetricsPath: "metrics",
ScrapeInterval: model.Duration(time.Duration(*p.configuration.ScrapeIntervalSeconds) * time.Second),
ScrapeTimeout: model.Duration(time.Duration(*p.configuration.ScrapeTimeoutSeconds) * time.Second),
BodySizeLimit: units.Base2Bytes(*p.configuration.BodySizeLimitBytes),
Member

This Base2Bytes conversion isn't really done anywhere else. I would lean towards removing it and accepting whatever value the customer sets. If they want, they can set 1024, or 1000. Is there any strict requirement that the number needs to be in base2?

Member Author

It's already getting converted to int64 again in the scraper, so it's just to satisfy the type.

https://github.com/prometheus/prometheus/blob/c83e1fc5748be3bd35bf0a31eb53690b412846a4/scrape/scrape.go#L270

Member

Ah nvm. I missed that BodySizeLimit is in units.Base2Bytes.

}
targets := []*targetgroup.Group{
{Labels: promModel.LabelSet{
promModel.AddressLabel: promModel.LabelValue(host + ":" + port),
Member

targets := make([]*targetgroup.Group, len(nodes))
for i, node := range nodes {
	targets[i] = &targetgroup.Group{Labels: promModel.LabelSet{
		promModel.AddressLabel: promModel.LabelValue(node.Hostname + ":" + port),
Member

Same as the other comment
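One Go caveat worth noting for the suggested loop: `make([]*targetgroup.Group, len(nodes))` fills the slice with nil pointers, so each element needs `targets[i] = &targetgroup.Group{...}` assigned before its `Labels` field is written. A stdlib-only sketch of the pattern, with hypothetical stand-in types in place of the Prometheus ones:

```go
package main

import "fmt"

// Stand-ins for the prometheus targetgroup/label types, just to show the
// slice-of-pointers pattern from the suggested loop.
type labelSet map[string]string
type group struct{ Labels labelSet }

type node struct{ Hostname string }

func buildTargets(nodes []node, port string) []*group {
	// make([]*group, n) yields a slice of nil pointers, so each element
	// must be allocated before Labels is written; writing through the nil
	// pointer would panic.
	targets := make([]*group, len(nodes))
	for i, n := range nodes {
		targets[i] = &group{Labels: labelSet{
			"__address__": n.Hostname + ":" + port,
		}}
	}
	return targets
}

func main() {
	for _, t := range buildTargets([]node{{"node-1"}, {"node-2"}}, "9100") {
		fmt.Println(t.Labels["__address__"])
	}
}
```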

}

sync := map[string][]*targetgroup.Group{
"prometheus": make([]*targetgroup.Group, 1),
Member

We are overwriting this key anyway. Any reason to pre-allocate a slice that gets overwritten?

Member Author

Ah no, I forgot to remove it.
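The fix the thread agrees on is to drop the pre-allocation and assign the built slice directly under the job key. A stdlib sketch with a hypothetical stand-in `group` type:

```go
package main

import "fmt"

type group struct{ Addr string }

// buildSync assigns the already-built target slice directly under the job
// key; pre-allocating the value with make([]*group, 1) would just be
// overwritten by this assignment.
func buildSync(targets []*group) map[string][]*group {
	return map[string][]*group{
		"prometheus": targets,
	}
}

func main() {
	sync := buildSync([]*group{{Addr: "localhost:9100"}})
	fmt.Println(len(sync["prometheus"]), sync["prometheus"][0].Addr)
}
```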

// once we start supporting HA, we will need to listen to the cluster change channel and
// convert the []mmodel.ClusterDiscovery entries into a map[string][]*targetgroup.Group
p.waitGroup.Add(1)
go func() {
Member

Just for clarity, you don't really need a separate goroutine at this point, but we are creating it for when we will need HA?

Member Author

Exactly, there will be a select listening on different channels for HA; that's why I added the comment above.
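The eventual HA shape the author describes, a goroutine selecting over a cluster-change channel and a stop channel, could look like this stdlib-only sketch (channel and function names are assumptions; in the plugin the change events would carry `[]mmodel.ClusterDiscovery` and trigger a target re-sync):

```go
package main

import "fmt"

// resyncOnChanges handles cluster-change events until stop is closed and
// returns the changes it processed. In the plugin, each event would rebuild
// the map[string][]*targetgroup.Group and re-sync the scrape targets.
func resyncOnChanges(changes <-chan string, stop <-chan struct{}) []string {
	var handled []string
	for {
		select {
		case c := <-changes:
			handled = append(handled, c)
		case <-stop:
			return handled
		}
	}
}

func main() {
	changes := make(chan string)
	stop := make(chan struct{})
	done := make(chan []string)

	go func() { done <- resyncOnChanges(changes, stop) }()

	changes <- "node added" // unbuffered send: handled before we continue
	close(stop)
	fmt.Println(<-done)
}
```

Because the `changes` channel is unbuffered, the send completes only after the goroutine has received the event, so closing `stop` afterwards shuts the loop down deterministically.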

@agnivade agnivade left a comment

Nicely done 👌

@isacikgoz isacikgoz added 3: Reviews Complete All reviewers have approved the pull request and removed 2: Dev Review Requires review by a core committer labels Dec 21, 2023
@isacikgoz isacikgoz merged commit f87dfae into master Dec 21, 2023
4 checks passed
@isacikgoz isacikgoz deleted the scrape-manager branch December 21, 2023 14:48
3 participants