
Refactor state_* metricsets to share response from endpoint #25640

Merged
17 commits merged on May 18, 2021

Conversation

ChrsMark
Member

@ChrsMark ChrsMark commented May 10, 2021

What does this PR do?

This PR changes how the kubernetes module handles state_* metricsets that share the same target endpoint. The idea originates from https://github.com/elastic/beats/blob/master/x-pack/metricbeat/module/cloudfoundry/cloudfoundry.go

Note: At this point the PR is more of a PoC, with changes applied only to the state_container and state_pod metricsets.

If we agree on this solution (and make sure that it can be applied with Agent too), we can extend it to the rest of the state_* metricsets as well as to the metricsets that fetch from the kubelet's endpoint (node, pod, container, volume, system).

In an upcoming PR we will apply a similar strategy to the metricsets that fetch from the kubelet's endpoint.
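
For reference, the sharing pattern looks roughly like this (a simplified sketch, not the exact PR code; the familiesCache fields and GetSharedFamilies signature mirror the review discussion below):

package kubernetes

import (
	"sync"
	"time"

	dto "github.com/prometheus/client_model/go"

	p "github.com/elastic/beats/v7/metricbeat/helper/prometheus"
	"github.com/elastic/beats/v7/metricbeat/mb"
)

// familiesCache holds the result of the last fetch from kube-state-metrics.
type familiesCache struct {
	sharedFamilies     []*dto.MetricFamily
	lastFetchErr       error
	lastFetchTimestamp time.Time
}

type module struct {
	mb.BaseModule
	lock   sync.Mutex
	fCache *familiesCache
}

// GetSharedFamilies hits the endpoint at most once per period, no matter how
// many metricsets of the module ask for the metric families.
func (m *module) GetSharedFamilies(prometheus p.Prometheus) ([]*dto.MetricFamily, error) {
	m.lock.Lock()
	defer m.lock.Unlock()

	now := time.Now()
	if now.Sub(m.fCache.lastFetchTimestamp) > m.Config().Period {
		m.fCache.sharedFamilies, m.fCache.lastFetchErr = prometheus.GetFamilies()
		m.fCache.lastFetchTimestamp = now
	}
	return m.fCache.sharedFamilies, m.fCache.lastFetchErr
}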

Why is it important?

To improve the performance of the module by avoiding fetching the same content multiple times.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Author's Checklist

  • Does it have an impact on Agent deployments too?
    Verified with different module config blocks. See step 4 of the testing notes below.

How to test this PR locally

  1. On a k8s cluster, install kube-state-metrics (https://github.com/kubernetes/kube-state-metrics/tree/master/examples/standard)
  2. Expose the kube-state-metrics endpoint on the local machine -> kubectl -n kube-system port-forward svc/kube-state-metrics 8081:8080
  3. Start capturing traffic on the kube-state-metrics endpoint -> tcpdump -i any -s 0 'tcp[((tcp[12:1] & 0xf0) >> 2):4] = 0x47455420' (the filter matches TCP payloads that start with the ASCII bytes of "GET ")
  4. Test with separate module config blocks (this is what Agent does):
- module: kubernetes
  metricsets:
    - state_pod
  period: 10s
  hosts: ["0.0.0.0:8081"]
  add_metadata: false

- module: kubernetes
  metricsets:
    - state_container
  period: 10s
  hosts: ["0.0.0.0:8081"]
  add_metadata: false

Verify with tcpdump's output that only one request takes place no matter how many modules/metricsets are enabled.

  5. Test one combined module block:
- module: kubernetes
  metricsets:
    - state_pod
    - state_container
  period: 10s
  hosts: ["0.0.0.0:8081"]
  add_metadata: false

Again, verify with tcpdump's output that only one request takes place.

Related issues

@ChrsMark ChrsMark added refactoring Team:Integrations Label for the Integrations team kubernetes Enable builds in the CI for kubernetes labels May 10, 2021
@ChrsMark ChrsMark self-assigned this May 10, 2021
@elasticmachine
Collaborator

Pinging @elastic/integrations (Team:Integrations)

@botelastic botelastic bot added needs_team Indicates that the issue/PR needs a Team:* label and removed needs_team Indicates that the issue/PR needs a Team:* label labels May 10, 2021
@ChrsMark
Member Author

@exekias, @jsoriano I would love your feedback on this. @kaiyan-sheng might be interested in this too, since something similar could be applied to AWS-related stuff.

I still need to verify whether this change will also have an impact on Agent deployments, since I'm not sure if Agent spawns metricsets directly per data_stream. If that's the case, this change will only have an impact on Beats deployments.

@elasticmachine
Collaborator

elasticmachine commented May 10, 2021

💚 Build Succeeded


Build stats

  • Build Cause: Pull request #25640 updated

  • Start Time: 2021-05-18T10:46:36.269+0000

  • Duration: 116 min 51 sec

  • Commit: 6a4b0e7

Test stats 🧪

Test Results: 0 failed, 8200 passed, 2351 skipped, 10551 total

Trends 🧪: build-time and test-count trend charts (images omitted)

💚 Flaky test report

Tests succeeded.


Contributor

@exekias exekias left a comment


Left some comments.

This approach could work, but I wonder if it will make things too complex. Would it make sense to put everything together in the same metricset? Did you consider that option?

Comment on lines 66 to 69
if m.prometheus == nil {
	m.prometheus = prometheus
}
go m.runStateMetricsFetcher(period)
Contributor


There is a race condition here. If this function is called by two metricsets at the same time you could end up with several fetchers running.
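
One conventional guard for this (a sketch, not necessarily what the PR ended up doing; ensureStateMetricsFetcher is a hypothetical name, runStateMetricsFetcher is the method from the diff above) is sync.Once, which makes the start idempotent across concurrent callers:

// fetcherOnce is an assumed addition to the module struct:
//     fetcherOnce sync.Once
func (m *module) ensureStateMetricsFetcher(prometheus p.Prometheus, period time.Duration) {
	// sync.Once runs the function at most once, so two metricsets calling
	// this concurrently can never start two background fetchers.
	m.fetcherOnce.Do(func() {
		m.prometheus = prometheus
		go m.runStateMetricsFetcher(period)
	})
}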

Contributor


Also, what if different metricsets have a different period?

Member


Also, what if different metricsets have a different period?

Period is defined at the module level, so all metricsets will have the same period.

Member Author

@ChrsMark ChrsMark May 11, 2021


Both cases should be handled by https://github.com/elastic/beats/pull/25640/files#diff-9933f42aec81125910c9923f83fa89c9a76918aac6e3bd5a68c430aeeab91084R98-R106.

Taking into consideration that the period is set at the Module level, I think the different-period check is redundant. Metricsets will not share the input if they are configured in different Modules, like:

- module: kubernetes
  metricsets:
    - state_pod
- module: kubernetes
  metricsets:
    - state_container

So the period adjustment at https://github.com/elastic/beats/pull/25640/files#diff-9933f42aec81125910c9923f83fa89c9a76918aac6e3bd5a68c430aeeab91084R100 could be removed.

Member


- module: kubernetes
  metricsets:
    - state_pod
- module: kubernetes
  metricsets:
    - state_container

Take into account that this configuration will instantiate two different modules, and they won't reuse the shared families. If we want/need to support configs like these, this approach won't work. To reuse the families we would need one of:

Contributor


Can you investigate what Agent does with a config like that? I think doing this at the module level is good enough if Agent handles it correctly, and it removes the complexity of indexing by endpoint, metrics path, and other parameters (including the security part).

@ChrsMark
Member Author

Left some comments.

This approach could work, but I wonder if it will make things too complex. Would it make sense to put everything together in the same metricset? Did you consider that option?

That option was discussed and rejected in the past (e.g. in #12938), and recently we again ended up keeping the metricsets as they are and going with this improvement instead.

Member

@jsoriano jsoriano left a comment


Added some suggestions that could simplify things.


Member

@jsoriano jsoriano left a comment


LGTM for Beats; we will have to check whether this solves the issue in Agent (#25640 (comment)). Added only a small suggestion.

@ChrsMark
Member Author

Tested that with Agent and figured out that metricsets are spawned under different Module objects, so I changed the implementation to use a global cache at the Module level, shared across all Module instances.

Did some manual tests, and the logs indicate that the fetched content is reused. I will test it more.
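
The builder-scoped cache pattern, roughly (a sketch reusing the module and familiesCache types from the earlier snippet; the existence check reflects a review suggestion further down):

// ModuleBuilder returns a closure that captures one shared cache, so every
// kubernetes module instance created through it sees the same map, even when
// Agent spawns each metricset under its own Module object.
func ModuleBuilder() func(base mb.BaseModule) (mb.Module, error) {
	sharedFamiliesCache := make(map[string]*familiesCache)
	return func(base mb.BaseModule) (mb.Module, error) {
		hash := generateCacheHash(base.Config().Hosts)
		if _, ok := sharedFamiliesCache[hash]; !ok {
			sharedFamiliesCache[hash] = &familiesCache{}
		}
		return &module{BaseModule: base, fCache: sharedFamiliesCache[hash]}, nil
	}
}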

Agent's standalone config:

id: 322f76f0-b3eb-11eb-9eaa-250cd6228ac5
revision: 2
outputs:
  default:
    type: elasticsearch
    hosts:
      - 'http://elasticsearch:9200'
    username: elastic
    password: changeme
output_permissions:
  default:
    _fallback:
      cluster:
        - monitor
      indices:
        - names:
            - logs-*
            - metrics-*
            - traces-*
            - .logs-endpoint.diagnostic.collection-*
            - synthetics-*
          privileges:
            - auto_configure
            - create_doc
agent:
  monitoring:
    enabled: false
    logs: false
    metrics: false
inputs:
  - id: 403033a2-494d-43a7-9802-8435b9d17ca4
    name: kubernetes-1
    revision: 1
    type: kubernetes/metrics
    use_output: default
    meta:
      package:
        name: kubernetes
        version: 0.5.1
    data_stream:
      namespace: default3
    streams:
      - id: >-
          kubernetes/metrics-kubernetes.state_container-403033a2-494d-43a7-9802-8435b9d17ca4
        data_stream:
          dataset: kubernetes.state_container
          type: metrics
        metricsets:
          - state_container
        add_metadata: true
        hosts:
          - '0.0.0.0:8081'
        period: 10s
      - id: >-
          kubernetes/metrics-kubernetes.state_pod-403033a2-494d-43a7-9802-8435b9d17ca4
        data_stream:
          dataset: kubernetes.state_pod
          type: metrics
        metricsets:
          - state_pod
        add_metadata: true
        hosts:
          - '0.0.0.0:8081'
        period: 10s

Some sample Debug logs:

{"log.level":"warn","@timestamp":"2021-05-14T18:11:25.406+0300","log.logger":"debug(10s[0.0.0.0:8081])","log.origin":{"file.name":"kubernetes/kubernetes.go","file.line":86},"message":"DIFF: state_container != state_pod","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2021-05-14T18:11:25.406+0300","log.logger":"debug(10s[0.0.0.0:8081])","log.origin":{"file.name":"kubernetes/kubernetes.go","file.line":90},"message":"FETCH families for ms: state_container. Last setter was state_pod","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2021-05-14T18:11:25.406+0300","log.logger":"debug(10s[0.0.0.0:8081])","log.origin":{"file.name":"kubernetes/kubernetes.go","file.line":90},"message":"FETCH families for ms: state_container. Last setter was state_pod","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2021-05-14T18:11:35.404+0300","log.logger":"debug (10s[0.0.0.0:8081])","log.origin":{"file.name":"kubernetes/kubernetes.go","file.line":86},"message":"DIFF: state_pod != state_container","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2021-05-14T18:11:35.404+0300","log.logger":"debug (10s[0.0.0.0:8081])","log.origin":{"file.name":"kubernetes/kubernetes.go","file.line":95},"message":"REUSE families for ms: state_pod. Last setter was state_container","ecs.version":"1.6.0"}
{"log.level":"warn","@timestamp":"2021-05-14T18:11:35.404+0300","log.logger":"debug (10s[0.0.0.0:8081])","log.origin":{"file.name":"kubernetes/kubernetes.go","file.line":95},"message":"REUSE families for ms: state_container. Last setter was state_container","ecs.version":"1.6.0"}

@jsoriano jsoriano self-requested a review May 17, 2021 09:27
defer m.lock.Unlock()

now := time.Now()
hash := fmt.Sprintf("%s%s", m.BaseModule.Config().Period, m.BaseModule.Config().Hosts)
Member


Please use a common function to generate the hash; this hash is also calculated in the function returned by ModuleBuilder().

And/or consider doing the initialization of m.fCache[hash] here so the hash doesn't need to be calculated when initializing the module.

Comment on lines 77 to 78
m.lock.Lock()
defer m.lock.Unlock()
Member


This lock is in the module, but the cache is shared between all modules. The cache should have its own lock, and be the same for all modules/metricsets using it.
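
A sketch of that shape, with the cache owning its mutex (getOrCreate is a hypothetical helper name; kubeStateMetricsCache matches the type referenced in later snippets):

type kubeStateMetricsCache struct {
	sync.Mutex
	cacheMap map[string]*familiesCache
}

// getOrCreate is safe to call from any module or metricset: everyone
// synchronizes on the cache's own lock rather than on per-module locks
// that would guard nothing shared.
func (c *kubeStateMetricsCache) getOrCreate(hash string) *familiesCache {
	c.Lock()
	defer c.Unlock()
	if _, ok := c.cacheMap[hash]; !ok {
		c.cacheMap[hash] = &familiesCache{}
	}
	return c.cacheMap[hash]
}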

Member Author


Nice catch, thanks!

fCache.lastFetchTimestamp = now
fCache.setter = ms
} else {
m.logger.Warn("REUSE families for ms: ", ms, ". Last setter was ", fCache.setter)
Member


Debug?

Member Author


Yeap, I will remove them completely.

Comment on lines 84 to 86
if ms != fCache.setter {
m.logger.Warn("DIFF[ms!=cacheSetter]: ", ms, " != ", fCache.setter)
}
Member


Is this only to report that the metricset getting the families is different from the metricset that stored them? Not sure if it's needed; in any case, log it at the debug level.

Member Author


It will be removed.

}

// New create a new instance of the MetricSet
// Part of new is also setting up the configuration by processing additional
// Part of newF is also setting up the configuration by processing additional
Member


Typo?

Suggested change
// Part of newF is also setting up the configuration by processing additional
// Part of new is also setting up the configuration by processing additional

sharedFamiliesCache := make(cacheMap)
return func(base mb.BaseModule) (mb.Module, error) {
hash := fmt.Sprintf("%s%s", base.Config().Period, base.Config().Hosts)
sharedFamiliesCache[hash] = &familiesCache{}
Member


These entries will never be removed; this can be a leak if metricbeat is used to monitor dynamically created clusters. I guess this is only a corner case, so we can leave it for now.

Member Author

@ChrsMark ChrsMark May 17, 2021


I will leave a comment about this in the code so we have a good pointer if an issue arises in the future. One thing we could do to tackle this (an off-the-top-of-my-head suggestion) is a module-level method that figures out which entries to remove, called from the metricset's Close().
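
A hypothetical shape for that cleanup (refs is an invented reference-count field; nothing like this landed in this PR):

// release would be called from each metricset's Close(); the entry is
// dropped once its last user is gone, avoiding the leak described above.
func (c *kubeStateMetricsCache) release(hash string) {
	c.Lock()
	defer c.Unlock()
	fc, ok := c.cacheMap[hash]
	if !ok {
		return
	}
	fc.refs--
	if fc.refs <= 0 {
		delete(c.cacheMap, hash)
	}
}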

defer m.lock.Unlock()

now := time.Now()
hash := fmt.Sprintf("%s%s", m.BaseModule.Config().Period, m.BaseModule.Config().Hosts)
Member


I don't think the period needs to be part of the hash key; it is OK if metricsets with the same hosts but different periods share the families.

Member Author


I agree, I will remove it.

@mergify
Contributor

mergify bot commented May 17, 2021

This pull request is now in conflicts. Could you fix it? 🙏
To fix up this pull request, you can check it out locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/

git fetch upstream
git checkout -b refactor_state_metrics upstream/refactor_state_metrics
git merge upstream/master
git push upstream refactor_state_metrics

@ChrsMark ChrsMark added v7.14.0 test-plan Add this PR to be manual test plan labels May 17, 2021
@jsoriano jsoriano self-requested a review May 17, 2021 18:14
// NOTE: These entries will never be removed; this can be a leak if
// metricbeat is used to monitor dynamically created clusters.
// (https://github.com/elastic/beats/pull/25640#discussion_r633395213)
sharedFamiliesCache[hash] = &familiesCache{}
Member


This map is being written every time a module is created. As it is now, I see two possible problems:

  • There can be race conditions (and panics) if several metricsets are created at the same time (not sure if possible), or if a metricset calls GetSharedFamilies while another metricset with the same hosts is being created (I guess this can happen with bad luck and/or with a low metricbeat.max_start_delay).
  • If a metricset is created after another one has already filled the cache, the cache will be reset; not a big problem, but it could easily be solved by checking if the cache entry exists.

I think reads and writes on this map should also be thread-safe. And ideally we should check if there is some entry in the cache for a given key before overwriting it here.

Member

@jsoriano jsoriano left a comment


Thanks, this is looking good. Only some nitpicking comments added.

Comment on lines 93 to 94
fCache := m.kubeStateMetricsCache.cacheMap[hash]
if _, ok := m.kubeStateMetricsCache.cacheMap[hash]; !ok {
Member


Suggested change
fCache := m.kubeStateMetricsCache.cacheMap[hash]
if _, ok := m.kubeStateMetricsCache.cacheMap[hash]; !ok {
fCache, ok := m.kubeStateMetricsCache.cacheMap[hash]
if !ok {


fCache := m.kubeStateMetricsCache.cacheMap[hash]
if _, ok := m.kubeStateMetricsCache.cacheMap[hash]; !ok {
return nil, fmt.Errorf("Could not get kube_state_metrics cache entry for %s ", hash)
Member


Suggested change
return nil, fmt.Errorf("Could not get kube_state_metrics cache entry for %s ", hash)
return nil, fmt.Errorf("could not get kube_state_metrics cache entry for %s ", hash)

}

func (m *module) GetSharedFamilies(prometheus p.Prometheus) ([]*dto.MetricFamily, error) {
now := time.Now()
Member


Should this be done after getting the lock? If not, if a call to GetFamilies takes more than the period, all waiting metricsets will request the families again instead of reusing the ones just received, and the waiting metricsets that end up requesting the families again will store an "old" timestamp.
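
In other words, the suggested ordering (a sketch; m.cache stands for the shared kubeStateMetricsCache and m.fCache for the module's cache entry, as in the snippets above):

func (m *module) GetSharedFamilies(prometheus p.Prometheus) ([]*dto.MetricFamily, error) {
	m.cache.Lock()
	defer m.cache.Unlock()

	// Taking the timestamp after acquiring the lock means a metricset that
	// was blocked behind a slow GetFamilies call sees the fresh
	// lastFetchTimestamp and reuses the just-fetched families, instead of
	// fetching again and storing a stale timestamp.
	now := time.Now()
	if now.Sub(m.fCache.lastFetchTimestamp) > m.Config().Period {
		m.fCache.sharedFamilies, m.fCache.lastFetchErr = prometheus.GetFamilies()
		m.fCache.lastFetchTimestamp = now
	}
	return m.fCache.sharedFamilies, m.fCache.lastFetchErr
}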

}

func generateCacheHash(host []string) string {
return fmt.Sprintf("%s", host)
Member


Consider using something like https://github.com/mitchellh/hashstructure for hashing.
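
For example (a sketch against hashstructure's v2 API, which hashes arbitrary values to a uint64; note the signature grows an error return compared to the function above):

import (
	"strconv"

	"github.com/mitchellh/hashstructure/v2"
)

func generateCacheHash(hosts []string) (string, error) {
	// Hash the host list structurally instead of relying on fmt's default
	// formatting of slices.
	id, err := hashstructure.Hash(hosts, hashstructure.FormatV2, nil)
	if err != nil {
		return "", err
	}
	return strconv.FormatUint(id, 10), nil
}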

cacheMap: make(map[string]*familiesCache),
}
return func(base mb.BaseModule) (mb.Module, error) {
hash := generateCacheHash(base.Config().Hosts)
Member


In principle the hash is always going to be the same during the life of this module. WDYT about storing it in module{} so it doesn't need to be recalculated every time? Actually, for the same reason, the module could keep a reference to the cache entry directly.
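
That could look like this (a sketch combining the pieces above: the hash is computed once at build time and the module keeps a direct *familiesCache reference):

func ModuleBuilder() func(base mb.BaseModule) (mb.Module, error) {
	cache := &kubeStateMetricsCache{cacheMap: make(map[string]*familiesCache)}
	return func(base mb.BaseModule) (mb.Module, error) {
		// Resolve the cache entry here; GetSharedFamilies then just
		// dereferences m.fCache and never recomputes the hash.
		hash := generateCacheHash(base.Config().Hosts)
		return &module{BaseModule: base, fCache: cache.getOrCreate(hash)}, nil
	}
}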

Member

@jsoriano jsoriano left a comment


👍

@ChrsMark ChrsMark merged commit 96481f1 into elastic:master May 18, 2021
ChrsMark added a commit to ChrsMark/beats that referenced this pull request May 18, 2021
ChrsMark added a commit that referenced this pull request May 18, 2021
@andresrc andresrc added the test-plan-added This PR has been added to the test plan label Jun 29, 2021
@ChrsMark ChrsMark mentioned this pull request Jun 8, 2022
8 tasks
@ChrsMark ChrsMark mentioned this pull request Aug 4, 2022
6 tasks
Labels
kubernetes Enable builds in the CI for kubernetes refactoring Team:Integrations Label for the Integrations team test-plan Add this PR to be manual test plan test-plan-added This PR has been added to the test plan v7.14.0