
[exporterqueue] Bare minimum frame of queue batcher + unit test. #11532

Merged
merged 10 commits into open-telemetry:main from sili-queue-tiny on Oct 26, 2024

Conversation

@sfc-gh-sili (Contributor) commented Oct 23, 2024

Description

This PR is a bare minimum implementation of a component called the queue batcher. On completion, this component will replace `consumers` in `queue_sender`, moving queue-batch to a pulling model instead of a pushing model.

Limitations of the current code

  • This implements only the case where batching is disabled, which means no merging or splitting of requests and no timeout flushing.
  • This implementation does not enforce an upper bound on concurrency.

These unimplemented code paths currently panic; they will be replaced with actual implementations in upcoming PRs. This PR is split out from #11507 for easier review.

Design doc:
https://docs.google.com/document/d/1y5jt7bQ6HWt04MntF8CjUwMBBeNiJs2gV4uUZfJjAsE/edit?usp=sharing
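For orientation, here is a minimal sketch of what such a frame can look like. It is illustrative only: the names (`Batcher`, `batch`, `flush`, `idxList`, `stopWG`) echo snippets quoted in the review below, but the stand-in `Request` type and all signatures are assumptions, not the PR's actual code.

```go
package queue

import (
	"context"
	"sync"
)

// Request is a stand-in for internal.Request; only Export matters here.
type Request interface {
	Export(ctx context.Context) error
}

// batch is a set of requests read from the queue, plus the queue indices
// needed to acknowledge them once the export succeeds.
type batch struct {
	ctx     context.Context
	req     Request
	idxList []uint64
}

// Batcher pulls requests from the queue and flushes them. In this bare
// minimum frame only the "batching disabled" path is implemented: each
// request becomes its own single-element batch and is flushed immediately.
type Batcher struct {
	batchingEnabled bool
	stopWG          sync.WaitGroup
}

// consume handles one request read from the queue at index idx.
func (qb *Batcher) consume(ctx context.Context, req Request, idx uint64) {
	if qb.batchingEnabled {
		// Merging/splitting and timeout flushing land in follow-up PRs.
		panic("batching enabled is not implemented yet")
	}
	batchToFlush := batch{ctx: ctx, req: req, idxList: []uint64{idx}}
	qb.stopWG.Add(1)
	go func() {
		defer qb.stopWG.Done()
		qb.flush(batchToFlush)
	}()
}

// flush exports a single batch; error handling is elided in this sketch.
func (qb *Batcher) flush(batchToFlush batch) {
	_ = batchToFlush.req.Export(batchToFlush.ctx)
}
```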

Link to tracking issue

#8122
#10368

Testing

Documentation

@sfc-gh-sili changed the title from "Batcher just the frame" to "[exporterqueue] Frame of queue batcher + unit test." Oct 23, 2024
@sfc-gh-sili marked this pull request as ready for review October 23, 2024 22:30
@sfc-gh-sili requested a review from a team as a code owner October 23, 2024 22:30
@sfc-gh-sili requested a review from codeboten October 23, 2024 22:30
@sfc-gh-sili changed the title from "[exporterqueue] Frame of queue batcher + unit test." to "[exporterqueue] Bare minimum frame of queue batcher + unit test." Oct 23, 2024
codecov bot commented Oct 24, 2024

Codecov Report

Attention: Patch coverage is 88.57143% with 8 lines in your changes missing coverage. Please review.

Project coverage is 91.45%. Comparing base (dfc232e) to head (5dd36b5).
Report is 1 commit behind head on main.

Files with missing lines                  Patch %   Lines
exporter/internal/queue/fake_request.go   66.66%    8 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #11532      +/-   ##
==========================================
- Coverage   91.46%   91.45%   -0.01%     
==========================================
  Files         435      438       +3     
  Lines       23757    23827      +70     
==========================================
+ Hits        21729    21791      +62     
- Misses       1650     1658       +8     
  Partials      378      378              


@sfc-gh-sili requested a review from dmitryax October 24, 2024 05:06
queue Queue[internal.Request]
maxWorkers int

exportFunc func(context.Context, internal.Request) error
Member:
Is this not Request.export?

@sfc-gh-sili (Contributor, author) replied Oct 24, 2024:
Good point.
This was modeled after the consumeFunc used in consumer, but it seems having req.Export() here is enough for this use case.
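In other words, instead of storing an export callback on the struct, the flush path can call the request's own Export method directly. A sketch of the change being discussed, continuing the stand-in types from the sketch in the description above:

```go
// Before: the batcher carries an export callback, mirroring the consumeFunc
// used by the queue consumers.
type batcherWithCallback struct {
	exportFunc func(ctx context.Context, req Request) error
}

// After: flush calls the request's own Export method directly, so the
// callback field can be dropped.
func (qb *Batcher) flushDirect(b batch) error {
	return b.req.Export(b.ctx)
}
```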

idxList []uint64
}

type Batcher struct {
Member:
Comment on the struct please.

}
}

// If preconditions pass, flush() take an item from the head of batch list and exports it.
Member:
In Go, a doc comment starts with the name of the function it documents: `flush`.
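Applied to the comment quoted above, the Go doc-comment convention looks like this (a sketch reusing the types from the earlier one):

```go
// flush takes an item from the head of the batch list and exports it,
// provided the preconditions pass.
func (qb *Batcher) flush(batchToFlush batch) {
	// ...
}
```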

Comment on lines 63 to 68
go func() {
qb.flush(batchToFlush)
qb.stopWG.Done()
}()
Member:
Using defer is a best practice to ensure the function gets called on all possible "return paths". Even though this is a simple func now, it may get more complicated.

Suggested change:

Before:
    go func() {
        qb.flush(batchToFlush)
        qb.stopWG.Done()
    }()

After:
    go func() {
        defer qb.stopWG.Done()
        qb.flush(batchToFlush)
    }()

@sfc-gh-sili (Contributor, author) replied:
Thanks for the advice!

return err
}

if qb.batchCfg.Enabled {
Member:
I think it would be simpler if Batcher were an interface, with the "not enabled" and "enabled" cases as two different implementations.

@sfc-gh-sili (Contributor, author) replied:
That's a good idea.
Implemented a special-case batcher called DisabledBatcher to handle the "not enabled" case. Let me know what you think of it.
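A rough shape of that split — the Batcher interface and DisabledBatcher names come from the reply above, while the method set and fields are assumptions for illustration:

```go
// Batcher is the common interface; the "enabled" and "not enabled" cases
// become two separate implementations rather than branches in one type.
type Batcher interface {
	Start(ctx context.Context) error
	Shutdown(ctx context.Context) error
}

// DisabledBatcher covers the "not enabled" case: every request pulled from
// the queue is flushed immediately and individually.
type DisabledBatcher struct {
	stopWG sync.WaitGroup
	// queue, maxWorkers, ... elided
}

func (db *DisabledBatcher) Start(_ context.Context) error { return nil }

// Shutdown waits for all in-flight flush goroutines before returning.
func (db *DisabledBatcher) Shutdown(_ context.Context) error {
	db.stopWG.Wait()
	return nil
}
```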

@sfc-gh-sili force-pushed the sili-queue-tiny branch 2 times, most recently from 20a8bb1 to fc658a0, October 24, 2024 20:43
stopWG sync.WaitGroup
}

func NewBatcher(batchCfg exporterbatcher.Config, queue Queue[internal.Request], maxWorkers int) Batcher {
Member:
Make it return an error instead of panicking.
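That is, something along these lines — the signature comes from the snippet above; the error text and DisabledBatcher wiring (plus the usual errors import) are illustrative:

```go
// NewBatcher reports unsupported configurations as an error instead of
// panicking, letting callers fail cleanly at setup time.
func NewBatcher(batchCfg exporterbatcher.Config, queue Queue[internal.Request], maxWorkers int) (Batcher, error) {
	if batchCfg.Enabled {
		return nil, errors.New("batching is not yet implemented")
	}
	return &DisabledBatcher{}, nil
}
```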

@dmitryax added the Skip Changelog label (PRs that do not require a CHANGELOG.md entry) Oct 25, 2024
@dmitryax merged commit 3e13ca1 into open-telemetry:main Oct 26, 2024
48 of 49 checks passed
The github-actions bot added this to the next release milestone Oct 26, 2024
dmitryax pushed a commit that referenced this pull request Oct 29, 2024
#### Description

This PR follows #11532 and implements support for a limited worker pool for the queue batcher.

Design doc:

https://docs.google.com/document/d/1y5jt7bQ6HWt04MntF8CjUwMBBeNiJs2gV4uUZfJjAsE/edit?usp=sharing

#### Link to tracking issue

#8122
#10368
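A worker-pool limit like the one described in that follow-up is commonly implemented with a buffered channel used as a counting semaphore — a sketch of the general technique, not necessarily the mechanism that PR adopted:

```go
package queue

import "sync"

// workerPool caps in-flight flush goroutines at maxWorkers using a buffered
// channel as a counting semaphore.
type workerPool struct {
	sem    chan struct{}
	stopWG sync.WaitGroup
}

func newWorkerPool(maxWorkers int) *workerPool {
	return &workerPool{sem: make(chan struct{}, maxWorkers)}
}

// run blocks until a worker slot is free, then executes fn on a goroutine.
func (p *workerPool) run(fn func()) {
	p.sem <- struct{}{} // acquire a slot; blocks when maxWorkers are busy
	p.stopWG.Add(1)
	go func() {
		defer func() {
			<-p.sem // release the slot on every return path
			p.stopWG.Done()
		}()
		fn()
	}()
}
```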
@sfc-gh-sili deleted the sili-queue-tiny branch November 12, 2024 23:20
djaglowski pushed a commit to djaglowski/opentelemetry-collector that referenced this pull request Nov 21, 2024
djaglowski pushed a commit to djaglowski/opentelemetry-collector that referenced this pull request Nov 21, 2024
HongChenTW pushed a commit to HongChenTW/opentelemetry-collector that referenced this pull request Dec 19, 2024
HongChenTW pushed a commit to HongChenTW/opentelemetry-collector that referenced this pull request Dec 19, 2024