
Adding SegmentMetadataEvent and publishing them via KafkaEmitter #14281

Merged (6 commits) on Jun 2, 2023

Conversation

@harinirajendran (Contributor) commented May 15, 2023

Adding a new SegmentMetadataEvent and publishing these segment-related metadata events to Kafka by enhancing the KafkaEmitter

Description

In this PR, we enhance the KafkaEmitter to emit metadata about published segments (SegmentMetadataEvent) to a Kafka topic. Downstream services can use this segment metadata to query Druid intelligently, based on which segments have been published. Like the other event types, segment metadata is published to its Kafka topic as a JSON string.

Old behavior of Kafka Emitter

The Kafka emitter always emits metrics and alerts, and emits requests only if the request.topic config is set.
The metric.topic and alert.topic configs are mandatory and cannot be null.

Current behavior of Kafka Emitter [with backwards compatibility]

We introduce a new config, event.types, which dictates the types of events the KafkaEmitter emits. It takes a list of strings containing one or more of [alerts, metrics, requests, segmentMetadata]. For each type listed, the corresponding topic config (alert.topic, metric.topic, request.topic, or segmentMetadata.topic) must be set and non-empty; a sample configuration is sketched below.
If event.types is not set, the Kafka emitter defaults to emitting metrics and alerts. In that case, to maintain backwards compatibility, requests are emitted only if request.topic is set.
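
A minimal sketch of what this could look like in runtime.properties (assuming the druid.emitter.kafka.* property prefix this emitter uses; the broker address and topic names are placeholders):

    druid.emitter=kafka
    druid.emitter.kafka.bootstrap.servers=broker-1:9092
    # emit metrics, alerts, and the new segment metadata events
    druid.emitter.kafka.event.types=["metrics", "alerts", "segmentMetadata"]
    druid.emitter.kafka.metric.topic=druid-metrics
    druid.emitter.kafka.alert.topic=druid-alerts
    druid.emitter.kafka.segmentMetadata.topic=druid-segment-metadata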

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious to an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@harinirajendran changed the title from "OBSDATA-440 Adding SegmentMetadataEvent and publishing them via KafkaSegmentMetadataEmitter" to "OBSDATA-440 Adding SegmentMetadataEvent and publishing them via KafkaEmitter" on May 15, 2023
@harinirajendran (Contributor, Author)

@nishantmonu51: I have added a new event type in this PR that would cause the following exception to be thrown in AmbariMetricsEmitter and DropWizardEmitter:

throw new ISE("unknown event type [%s]", event.getClass());

Should I explicitly filter this new event type from these emitters?

@gianm (Contributor) commented May 16, 2023

Should I explicitly filter this new event type from these emitters?

IMO it would make more sense to edit those emitters to ignore unknown event types.

Btw, this may be OK for your use case, but, I wanted to point out that there is a race here: it's possible for the segments to be committed and for the server to crash before it emits this event. So, some might get missed.

@harinirajendran changed the title from "OBSDATA-440 Adding SegmentMetadataEvent and publishing them via KafkaEmitter" to "Adding SegmentMetadataEvent and publishing them via KafkaEmitter" on May 16, 2023
@harinirajendran (Contributor, Author)

IMO it would make more sense to edit those emitters to ignore unknown event types.

That makes sense @gianm. I'll update those emitters to ignore the new event type.
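
Roughly, that change could look like the following sketch (not the emitters' actual code; emitMetric and emitAlert are hypothetical stand-ins for each emitter's existing handling):

    @Override
    public void emit(Event event)
    {
      if (event instanceof ServiceMetricEvent) {
        emitMetric((ServiceMetricEvent) event);  // stand-in for the existing metric path
      } else if (event instanceof AlertEvent) {
        emitAlert((AlertEvent) event);           // stand-in for the existing alert path
      } else {
        // previously: throw new ISE("unknown event type [%s]", event.getClass());
        log.debug("Ignoring unknown event type [%s]", event.getClass());
      }
    }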

@harinirajendran (Contributor, Author) commented May 16, 2023

Btw, this may be OK for your use case, but, I wanted to point out that there is a race here: it's possible for the segments to be committed and for the server to crash before it emits this event. So, some might get missed.

That's a great point @gianm. Do you have a recommendation for a better place to emit this segment metadata event to prevent this?

@gianm (Contributor) commented May 17, 2023

That's a great point @gianm. Do you have a recommendation for a better place to emit this segment metadata event to prevent this?

I think for it to be "perfect" the best way to do it would be to emit in the place you emit here, but also have some other process that detects missed emits somehow and fixes them up by redoing the missed emits. This would be a lot more complex of an implementation, however. So I'd only recommend doing that if it seems worth it.

To figure that out, I would consider the requirements here. What kind of things are likely to consume the emitted payloads? Could they tolerate either of the following conditions?

  • missed emits (segments that are published, but never emitted)
  • bogus emits (segments that are never published, but were emitted anyway)

If one or both of these can be tolerated, the implementation becomes a lot simpler.

If I understand correctly, the one you have in this PR is the "missed emit" scenario. It won't generate bogus emits, but it can potentially miss some.
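
For illustration, the "detect and redo missed emits" process described above could take a shape like this sketch (entirely hypothetical; the fetch and rebuild helpers do not exist in Druid):

    // Periodic reconciler: re-emit any segment that was committed to the
    // metadata store but never observed on the Kafka topic.
    void reconcileMissedEmits()
    {
      Set<String> published = fetchPublishedSegmentIds();  // hypothetical: metadata store scan
      Set<String> emitted = fetchEmittedSegmentIds();      // hypothetical: Kafka topic audit
      for (String segmentId : published) {
        if (!emitted.contains(segmentId)) {
          emitter.emit(buildSegmentMetadataEventFor(segmentId));  // hypothetical rebuild
        }
      }
    }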

@harinirajendran (Contributor, Author)

I think for it to be "perfect" the best way to do it would be to emit in the place you emit here, but also have some other process that detects missed emits somehow and fixes them up by redoing the missed emits. This would be a lot more complex of an implementation, however. So I'd only recommend doing that if it seems worth it.

I think for now we can just go with this implementation and enhance it to emit missed segment metadata in the future.

@ektravel (Contributor) left a comment:

Docs: left some questions/suggestions.

@harinirajendran (Contributor, Author)

@harinirajendran can we update the PR description to reflect the current state of the code? It still references the protobuf implementation, which is no longer part of this.

Updated it, @xvrl. Will address the other comments now.

@harinirajendran (Contributor, Author) left a comment:

Addressed review comments.

      if (event instanceof ServiceMetricEvent) {
-       if (!metricQueue.offer(objectContainer)) {
+       if (!eventTypes.contains(EventType.METRICS) || !metricQueue.offer(objectContainer)) {
@harinirajendran (Contributor, Author) commented May 31, 2023

I was following the same strategy that was used for request events earlier:

          if (config.getRequestTopic() == null || !requestQueue.offer(objectContainer)) {
            requestLost.incrementAndGet();
          }
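
Following that same pattern, the new segment-metadata branch would look roughly like this (a sketch; segmentMetadataQueue, segmentMetadataLost, and EventType.SEGMENT_METADATA mirror the existing metric/request counterparts):

    } else if (event instanceof SegmentMetadataEvent) {
      // drop the event if segmentMetadata emission is not enabled or the queue is full
      if (!eventTypes.contains(EventType.SEGMENT_METADATA) || !segmentMetadataQueue.offer(objectContainer)) {
        segmentMetadataLost.incrementAndGet();
      }
    }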


@harinirajendran (Contributor, Author)

@xvrl @abhishekagarwal87 I ran the org.apache.druid.java.util.emitter.core.HttpEmitterConfigTest.testDefaultsLegacy test locally on my laptop and it passed, but it failed twice when triggered as part of this PR. What do we do in such cases?

@abhishekagarwal87 (Contributor)

@harinirajendran - We will have to fix it. When you run it locally, are you running just that one test? I suggest running the same Maven command that the GitHub Action runs; I think you will run into the same error then.

@harinirajendran (Contributor, Author)

@harinirajendran - We will have to fix it. When you run it locally, are you running just that one test? I suggest running the same Maven command that the GitHub Action runs; I think you will run into the same error then.

Yeah, I was just running that one test. Let me try using the same command and see if I can reproduce it. Thanks @abhishekagarwal87

@abhishekagarwal87 (Contributor)

@harinirajendran - You should avoid force-pushing in the future, as we can't see the diff from the last commit anymore. Can you describe the most recent change?

@harinirajendran (Contributor, Author)

@harinirajendran - You should avoid force-pushing in the future, as we can't see the diff from the last commit anymore. Can you describe the most recent change?

Ahh okay! Wasn't aware of that. Will keep that in mind for future PRs. I did not make any code changes. Just rebased the code on top of the latest master and pushed it again.

@abhishekagarwal87 abhishekagarwal87 merged commit 4ff6026 into apache:master Jun 2, 2023
@harinirajendran harinirajendran deleted the upstream-master branch June 2, 2023 16:07
harinirajendran added a commit to confluentinc/druid that referenced this pull request Jun 2, 2023
harinirajendran added a commit to confluentinc/druid that referenced this pull request Jun 7, 2023
harinirajendran added a commit to confluentinc/druid that referenced this pull request Jun 8, 2023
@abhishekagarwal87 abhishekagarwal87 added this to the 27.0 milestone Jul 19, 2023
pagrawal10 pushed a commit to confluentinc/druid that referenced this pull request Nov 28, 2023
pagrawal10 added a commit to confluentinc/druid that referenced this pull request Dec 18, 2023
* Bring dockerfile up to date

* add opencensus extension

* make checkstyle happy

* bump pom version for opencensus extension

* fix issues related to shading opencensus extension

The extension packaging included both shaded and unshaded dependencies
in the classpath. Shading should not be necessary in this case.

Also excludes guava dependencies, which are already provided by Druid
and don't need to be added to the extensions jars.

* METRICS-516: Adding Resource labels in OpenCensus Extension

* bump extension version to match release

* confluent-extensions with custom transform specs (#9)

* fix extraction transform serde (#10)

* fix check-style build errors

* setup semaphore build

* add checkstyle

* fix edge cases for internal topics

* METRICS-1302: Added prefix support for resource labels. (#14)

* METRICS-1302: Added prefix support for resource labels.

* Addressed review comments.

* Added and moved configs to ingestion spec, optimized code.

* Addressed review comments

* Updated metric dimension and other review comments

* Flipped ternary operator

* Moved from NullHandling to StringUtils.

* Removed unnecessary HashMap.

* Removed verbosity for instance variables.

* Added getters for configs, labels for distribution metric. (#15)

* Added getters for configs, labels for distribution metric.

* Addressed review comments

* Removed extra brackets in JsonProperty.

* Default resource label prefix to blank - Backward Compatibility (#16)

* update opencensus parent pom version

* update opencensus extensions for 0.19.x

* update parent pom version for confluent-extensions

* Add the capability to speed up S3 uploads using AWS transfer manager

* fix conflicting protobuf dependencies

Align protobuf dependencies to use the main pom one

* fix timestamp milliseconds in OpenCensusProtobufInputRowParser

- fix millisecond resolution being dropped when converting timestamps
- remove unnecessary conversion of ByteBuffer to ByteString
- make test code a little more concise

* improve OpenCensusProtobufInputRowParser performance (#25)

- remove the need to parse timestamps into their own column
- reduce the number of times we copy maps of labels
- pre-size hashmaps and arrays when possible
- use loops instead of streams in critical sections

Combined these changes improve parsing performance by about 15%
- added benchmark for reference

* deprecate OpenCensusInputRowParser in favor of OpenCensusProtobufInputFormat (#26)

InputRowParsers have been deprecated in favor of InputFormat.
This implements the InputFormat version of the OpenCensus Protobuf
parser, and deprecates the existing InputRowParser implementation.

- the existing InputRowParser behavior is unchanged.
- the InputFormat behaves like the InputRowParser, except for the
  default resource prefix which now defaults to "resource." instead of
  empty.
- both implementations internally delegate to OpenCensusProtobufReader,
  which is covered by the existing InputRowParser tests.

* add default query context and update timeout to 30 sec

* Setting default query lane from druid console.

* Giving more heap space for test jvm in semaphore config.

* update parent pom version for Confluent extensions

* Add Java 11 image build and remove unused MySQL images

* fix docker image build failure caused by apache#10506

* switch build to use Java 11 by default

* Fixed forbiddenapi error

* Added phases before checks

* Fixed

* OpenTelemetry Emitter Extension (#47)

Add OpenTelemetry Emitter Extension

* Add dependency check (#59)

* Add dependency check

* Fix maven-dependency-plugin errors

* Add --fail-at-end flag

* Fix comment

* METRICS-3663 OpenTelemetry Metrics InputFormat (#63)

* An OpenTelemetry metrics extension

* An InputFormat that is able to ingest metrics that are in the OpenTelemetry format

* Unit tests for the InputFormat

* Benchmarking Tests for the new OpenTelemetryMetricsProtobufInputFormat

* update parent pom version for Confluent extensions

* Adding getRequiredColumns() in our custom transforms.

* Updating shade-plugin version in opentelemetry-emitter.

* Removing the unwanted maven-shade-plugin change.

* Adding JDK version to DockerFile and removing unwanted executions from main pom.xml file. (#75)

* Passing JDK_VERSION as build args to docker build. (#76)

* Make the OpenTelemetry InputFormat More Flexible to Metric, Value and Attribute Types (#67)

* Hybrid OpenCensusProtobufInputFormat in opencensus-extensions (#69)

* Support OpenTelemetry payloads in OpenCensusProtobufInputFormat
Support reading mixed OpenTelemetry and OpenCensus topics based on Kafka version header

* workaround classloader isolation
Workaround classloader isolation by using method handles to get access
to KafkaRecordEntity related methods and check record headers

Co-authored-by: Xavier Léauté <xl+github@xvrl.net>

* Modify the OpenTelemetry ProtobufReader's Handling of Attribute Types (#77)

* Only handle INT_VALUE, BOOL_VALUE, DOUBLE_VALUE and STRING_VALUE and return null otherwise
* fix wrong class in the DruidModule service provider definition

* Fixing Opencensus extension build failures.

* fix dependency check (#79)

* fix OpenTelemetry extension module service definition (#73) (#81)

* Setting default refresh value for task view as none. (#88)

As part of this we added a default parameter that can be passed for refresh widget to avoid every refresh widget getting affected.

* go/codeowners: Generate CODEOWNERS [ci skip] (#87)

* fixes in pom.xml files

* adapt to new input argument in ParseException

* adapt to the new constructor for DimensionsSpec

* update obs-data team as codeowners (#98)

* [OBSDATA-334] Patch opencensus/opentelemetry parse exception (#99)

* [METRICS-4487] add obs-oncall as codeowners (#101)

* DP-8085 - Migrate to Sempahore self-hosted agent (#100)

* [OBSDATA-334] Patch opentelemetry IllegalStateException for unsupported metric types (#103)

* Fixing checkstyle issues in opencensus and opentelemetry extensions. (#109)

* Remove SNAPSHOT from versions in confluent pom files

* Fixing CI/CD in 24.0.0 upgrade branch (#116)

* OBSDATA-440 Adding SegmentMetadataEvent and publishing them via KafkaSegmentMetadataEmitter (#117)

* Change unsupported type message from WARN to TRACE (#119)

* Use place holder for logging invalid format (#120)

Use place holder for logging invalid format for better performance

* DP-9370 - use cc-service-bot to manage Semaphore project (#118)

* chore: update repo semaphore project

* DP-9632: remediate duplicate Semaphore workflows (#121)

Only build the master branch and the `x.x.x-confluent` Druid release branches by default

* chore: update repo semaphore project

* Bump version to 24.0.1 in confluent extensions after rebasing on top of druid-24.0.1

* Bump version to 24.0.2 in confluent extensions after rebasing on top of druid-24.0.2

* OBSDATA-483: Adapt OpenCensus and OpenTelemetry extensions to the introduction of SettableByteEntity (#113)

* OBSDATA-483: Adapt opencensus extension to the introduction of SettableByteEntity

* OBSDATA-483: Adapt opentelemetry extension to the introduction of SettableByteEntity

* OBSDATA-483: Decide which reader to instantiate on read between opencensus and opentelemetry

* OBSDATA-483: Add logger config in opencensus tests

* OBSDATA-483: Fix issue with opening the byte entity

* OBSDATA-483: Instantiate the right iterator in every read request

* OBSDATA-483: Add comments

* OBSDATA-483: Address Xavier's comments

* OBSDATA-483: Remove unused member fields

* OBSDATA-483: Rename enum

* OBSDATA-483: Fix trace log to actually print the argument

* OBSDATA-483: Keep passing the underlying byte buffer and move its position explicitly

* OBSDATA-483: Fix checkstyle issues

* OBSDATA-483: Add back handling of InvalidProtocolBufferException

* OBSDATA-483: Extend the semaphore workflow execution time to 2 hours

* Revert "OBSDATA-483: Extend the semaphore workflow execution time to 2 hours"

* OBSDATA-483: Don't close iterator in sample

* chore: update repo semaphore project (#124)

Co-authored-by: Confluent Jenkins Bot <jenkins@confluent.io>

* [Metrics-4776] OpenTelemetry Extensions - Upgrade otel-proto version (#125)

* Upgrade proto version
* Fix names and tests - Upgrade version
* Fix open census tests
* Fix test name

* Move to Java 17 (#128)

* bumping version of java to 17 for semaphore test run

* bumping java version to 17 as per https://github.com/confluentinc/druid/pull/127/files

* After speaking with Xavier, made these changes

* Trying to add required flags to run druid using java 17 (#130)

* Use apache-jar-resource-bundle:1.5 instead of 1.5-SNAPSHOT (apache#14054) (#131)

Co-authored-by: Tejaswini Bandlamudi <96047043+tejaswini-imply@users.noreply.github.com>

* update parent pom version for Confluent extensions

* Fix CI/CD while upgrading to Druid 25.0.0

* Fix jest and prettify checks

* Adding SegmentMetadataEvent and publishing them via KafkaEmitter (apache#14281) (#139)

(cherry picked from commit 4ff6026)

* Downgrade busybox version to fix k8s IT (apache#14518) (#143)

Co-authored-by: Rishabh Singh <6513075+findingrish@users.noreply.github.com>

* Passing TARGETARCH in build_args to Docker build (#144)

* Downgrade busybox version to fix k8s IT (apache#14518)

* Add TargetArch needed in distribution/Dockerfile

* Fix linting

---------

Co-authored-by: Rishabh Singh <6513075+findingrish@users.noreply.github.com>

* remove docker-maven-plugin and Dockerfile customizations

- remove our custom profile to build using dockerfile-maven-plugin,
since that plugin is no longer maintained.

- remove our custom Dockerfile patches since we can now use the
  BUILD_FROM_SOURCE argument to decide if we want to build the tarball
  outside of docker.

* Revert "Trying to add required flags to run druid using java 17 (#130)" (#147)

This reverts our custom patch from commit 7cf2de4.

The necessary Java 17 exports are now included as part of 25.0.0
in https://github.com/confluentinc/druid/blob/25.0.0-confluent/examples/bin/run-java#L27-L56
which is now called by the druid.sh docker startup script as well.

The exports for java.base/jdk.internal.perf=ALL-UNNAMED are no longer
needed since apache#12481 (comment)

* removing use of semaphore cache as the public semaphore will not have cache (#145) (#148)

* utilize workflow level caching to publish the built
artifacts to the tests. otherwise turn off all caching of .m2 etc

* remove .m2/settings.xml to ensure build passes without internal artifact store

---------

Co-authored-by: Jeremy Kuhnash <111304461+jkuhnashconfluent@users.noreply.github.com>

* OBSDATA-1365: add support for debian based base images (#149)

* Debian-based base image upgrade

* updated suggestions

* Update Dockerfile

* minor correction

---------

* Revert "fix KafkaInputFormat with nested columns by delegating to underlying inputRow map instead of eagerly copying (apache#13406) (apache#13447)" (#155)

This reverts commit 23500a4.

* Filter Out Metrics with NoRecordedValue Flag Set (#157)

Metrics that contain the NoRecordedValue flag are being written to Druid with a value of 0. We should properly handle them in the backend.

* memcached cache: switch to AWS elasticache-java-cluster-client and add TLS support (apache#14827) (#159)

This PR updates the library used for the Memcached client to the AWS ElastiCache client: https://github.com/awslabs/aws-elasticache-cluster-client-memcached-for-java

This enables us to use the option of encrypting data in transit, which Amazon ElastiCache for Memcached now supports.

For clusters running the Memcached engine, ElastiCache supports Auto Discovery: the ability for client programs to automatically identify all of the nodes in a cache cluster, and to initiate and maintain connections to all of these nodes.

AWS has forked spymemcached 2.12.1, and has since added all the patches included in 2.12.2 and 2.12.3 as part of the 1.2.0 release, so this can now be considered an equivalent drop-in replacement. See also:
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/elasticache/AmazonElastiCacheClient.html#AmazonElastiCacheClient--

How to enable TLS with ElastiCache:

On the server side:
https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/in-transit-encryption-mc.html#in-transit-encryption-enable-existing-mc

On the client side: see the aws-elasticache-cluster-client-memcached-for-java repository linked above.

* PRSP-3603 Bump org.xerial.snappy:snappy-java to latest version to address CVEs (#164)

* Bump org.xerial.snappy:snappy-java from 1.1.8.4 to 1.1.10.5

* Add licenses

* [backport] Upgrade Avro to latest version (apache#14440) (#162)

Upgraded Avro to 1.11.1

(cherry picked from commit 72cf91f)

Co-authored-by: Tejaswini Bandlamudi <96047043+tejaswini-imply@users.noreply.github.com>

* Revert "PRSP-3603 Bump org.xerial.snappy:snappy-java to latest version to address CVEs (#164)" (#166)

This reverts commit 185d655.

* Upgrade Avro to latest version to address CVEs (#167)

* OBSDATA-1697: Do not build extensions not loaded by cc-druid (#152)

Create new profiles to enable only the used extensions during the build. This helps address CVEs that were being flagged due to the unused extensions.
---------

Co-authored-by: Keerthana Srikanth <ksrikanth@confluent.io>

* update parent pom version for Confluent extensions

* Add value to child POMs

* Upgrade dependencies to match upstream v28 & checkstyle fix

* KafkaEmitter changes

* Modifying RowFunction interface

* Fix test cases

* Fix test cases

* Fix test cases

* Fix test cases

* upgrade dependency as per druid 28

* Removing unnecessary change

* Change Maven repository URL

* Add Druid.xml

* Update tag name to match version

* Fix dist-used profile to use Hadoop compile version (#173)

* Changes based on PR comments

* Fix refreshButton

* Use onRefresh only once

* Fix snapshot so that the test passes

---------

Co-authored-by: Travis Thompson <trthomps@confluent.io>
Co-authored-by: Sumit Arrawatia <sumit.arrawatia@gmail.com>
Co-authored-by: Xavier Léauté <xvrl@apache.org>
Co-authored-by: Apoorv Mittal <amittal@confluent.io>
Co-authored-by: Xavier Léauté <xavier@confluent.io>
Co-authored-by: Huajun Qin <hqin@yahoo.com>
Co-authored-by: Huajun Qin <huajun@confluent.io>
Co-authored-by: CodingParsley <nayachen98@gmail.com>
Co-authored-by: Harini Rajendran <hrajendran@confluent.io>
Co-authored-by: Ivan Vankovich <ivankovich@c02yt5a0lvdr.attlocal.net>
Co-authored-by: Ivan Vankovich <ivankovich@confluent.io>
Co-authored-by: Marcus Greer <marcusgreer96@gmail.com>
Co-authored-by: Harini Rajendran <harini.rajendran@yahoo.com>
Co-authored-by: Yun Fu <fuyun12345@gmail.com>
Co-authored-by: Xavier Léauté <xl+github@xvrl.net>
Co-authored-by: lokesh-lingarajan <llingarajan@confluent.io>
Co-authored-by: Luke Young <91491244+lyoung-confluent@users.noreply.github.com>
Co-authored-by: Konstantine Karantasis <konstantine@confluent.io>
Co-authored-by: Naya Chen <nchen@confluent.io>
Co-authored-by: nlou9 <39046184+nlou9@users.noreply.github.com>
Co-authored-by: Corey Christous <cchristous@gmail.com>
Co-authored-by: Confluent Jenkins Bot <jenkins@confluent.io>
Co-authored-by: ConfluentTools <96149134+ConfluentTools@users.noreply.github.com>
Co-authored-by: Kamal  Narayan <119908061+kamal-narayan@users.noreply.github.com>
Co-authored-by: David Steere <hampycapper@msn.com>
Co-authored-by: Tejaswini Bandlamudi <96047043+tejaswini-imply@users.noreply.github.com>
Co-authored-by: Ghazanfar-CFLT <mghazanfar@confluent.io>
Co-authored-by: Rishabh Singh <6513075+findingrish@users.noreply.github.com>
Co-authored-by: Jeremy Kuhnash <111304461+jkuhnashconfluent@users.noreply.github.com>
Co-authored-by: Hardik Bajaj <58038410+hardikbajaj@users.noreply.github.com>
Co-authored-by: Michael Li <mli@confluent.io>
Co-authored-by: Keerthana Srikanth <ksrikanth@confluent.io>
Co-authored-by: Jan Werner <105367074+janjwerner-confluent@users.noreply.github.com>
Co-authored-by: mustajibmk <120099779+mustajibmk@users.noreply.github.com>
Co-authored-by: Pankaj kumar <pkumar@confluent.io>
pagrawal10 pushed a commit to confluentinc/druid that referenced this pull request Dec 19, 2023