From 7c6b6f95329d7841af61aa05aeb6b865102f791e Mon Sep 17 00:00:00 2001
From: Bharathwaj G
Date: Wed, 10 Jul 2024 11:45:07 +0530
Subject: [PATCH] Startree file formats codec merge (#29)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Fix flaky test in range aggregation yaml test (#14486)
Signed-off-by: bowenlan-amzn

* Use CODECOV_TOKEN (#14536)
Signed-off-by: Prudhvi Godithi

* [Tiered Caching] Moving query recomputation logic outside of write lock (#14187)
* Moving query recompute out of write lock
Signed-off-by: Sagar Upadhyaya
* [Tiered Caching] Moving query recomputation logic outside of write lock
Signed-off-by: Sagar Upadhyaya
* Adding java doc for the completable map
Signed-off-by: Sagar Upadhyaya
* Changes to call future handler only once per key
Signed-off-by: Sagar Upadhyaya
* Fixing spotless check
Signed-off-by: Sagar Upadhyaya
* Added changelog
Signed-off-by: Sagar Upadhyaya
* Addressing comments
Signed-off-by: Sagar Upadhyaya
* Fixing gradle fail
Signed-off-by: Sagar Upadhyaya
* Addressing comments to refactor unit test
Signed-off-by: Sagar Upadhyaya
* minor UT refactor
Signed-off-by: Sagar Upadhyaya
---------
Signed-off-by: Sagar Upadhyaya
Signed-off-by: Sagar <99425694+sgup432@users.noreply.github.com>
Co-authored-by: Sagar Upadhyaya

* Fix Flaky Test ClusterRerouteIT.testDelayWithALargeAmountOfShards (#14510)
Signed-off-by: kkewwei kkewwei@163.com
Signed-off-by: kkewwei kkewwei@163.com
Signed-off-by: kkewwei

* Add doc for debugging rest tests (#14491)
* add doc for debugging rest tests
Signed-off-by: bowenlan-amzn
* Update TESTING.md
Co-authored-by: Marc Handalian
Signed-off-by: bowenlan-amzn
* Address comment
Signed-off-by: bowenlan-amzn
---------
Signed-off-by: bowenlan-amzn
Co-authored-by: Marc Handalian

* Fix flaky DefaultCacheStatsHolderTests (#14462)
Signed-off-by: Peter Alfonsi
Co-authored-by: Peter Alfonsi

* [AUTO] [main] Add bwc version 2.15.1. (#14549)
* Add bwc version 2.15.1
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
* Fix auto-generated version
Signed-off-by: Andrew Ross
---------
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Andrew Ross
Co-authored-by: opensearch-ci-bot <83309141+opensearch-ci-bot@users.noreply.github.com>
Co-authored-by: Andrew Ross

* Fix flaky test TieredSpilloverCacheTests.testComputeIfAbsentConcurrently (#14550)
* Fix flaky test TieredSpilloverCacheTests.testComputeIfAbsentConcurrently
Signed-off-by: Sagar Upadhyaya
* Addressing comment
Signed-off-by: Sagar Upadhyaya
---------
Signed-off-by: Sagar Upadhyaya
Signed-off-by: Sagar Upadhyaya
Co-authored-by: Sagar Upadhyaya

* Add allowlist setting for ingest-common processors (#14479)

Add a new static setting that lets an operator choose specific ingest
processors to enable by name. The behavior is as follows:

- If the allowlist setting is not defined, all installed processors are enabled. This is the status quo.
- If the allowlist setting is defined as the empty set, then all processors are disabled.
- If the allowlist setting contains the names of valid processors, only those processors are enabled.
- If the allowlist setting contains a name of a processor that does not exist, then the server will fail to start with an IllegalStateException listing which processors were defined in the allowlist but are not installed.
- If the allowlist setting is changed between server restarts then any ingest pipeline using a now-disabled processor will fail. This is the same experience as if a pipeline used a processor defined by a plugin but that plugin was then uninstalled across restarts.

Related to #14439

Signed-off-by: Andrew Ross

* Fix file cache initialization (#14004)
* fix file cache initialization
Signed-off-by: panguixin
* changelog
Signed-off-by: panguixin
* add test
Signed-off-by: panguixin
---------
Signed-off-by: panguixin

* Add Ashish Singh as maintainer (#14567)
Signed-off-by: Bukhtawar Khan

* Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core (#14575)
Signed-off-by: Andriy Redko

* Bump com.azure:azure-storage-common from 12.21.2 to 12.25.1 in /plugins/repository-azure (#14517)
* Bump com.azure:azure-storage-common in /plugins/repository-azure
Bumps [com.azure:azure-storage-common](https://github.com/Azure/azure-sdk-for-java) from 12.21.2 to 12.25.1.
- [Release notes](https://github.com/Azure/azure-sdk-for-java/releases)
- [Commits](https://github.com/Azure/azure-sdk-for-java/compare/azure-storage-common_12.21.2...azure-storage-blob_12.25.1)
---
updated-dependencies:
- dependency-name: com.azure:azure-storage-common
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
* Updating SHAs
Signed-off-by: dependabot[bot]
* Update changelog
Signed-off-by: dependabot[bot]
Signed-off-by: Andriy Redko
---------
Signed-off-by: dependabot[bot]
Signed-off-by: Andriy Redko
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot[bot]

* Add allowlist setting for search-pipeline-common processors (#14562)

Add a new static setting that lets an operator choose specific search
pipeline processors to enable by name. The behavior is as follows:

- If the allowlist setting is not defined, all installed processors are enabled. This is the status quo.
- If the allowlist setting is defined as the empty set, then all processors are disabled.
- If the allowlist setting contains the names of valid processors, only those processors are enabled.
- If the allowlist setting contains a name of a processor that does not exist, then the server will fail to start with an IllegalStateException listing which processors were defined in the allowlist but are not installed.
- If the allowlist setting is changed between server restarts then any search pipeline using a now-disabled processor will fail. This is the same experience as if a pipeline used a processor defined by a plugin but that plugin was then uninstalled across restarts.

A distinct setting exists for each of request, response, and search phase results processors.

Related to #14439

Signed-off-by: Andrew Ross

* Bump Apache Lucene to 9.11.1 (#14576) (#14581)
(cherry picked from commit 0095fd1a44b583a7457b4d4578cdf6a0ed1fd2f3)
Signed-off-by: Andriy Redko

* Add unittests for RemoteClusterStateAttributesManager (#14427)
* Add unittests for RemoteClusterStateAttributesManager
Signed-off-by: Shivansh Arora

* Add Ashish Singh to codeowners (#14592)
Signed-off-by: Ashish Singh

* Add batching processor base type AbstractBatchingProcessor (#14554)
Signed-off-by: Liyun Xiu

* Add @InternalApi annotation to japicmp exclusions (#14597)
Signed-off-by: Andriy Redko

* Fix issue 14519: Parsing a GetResult returns NPE if found field is mis… (#14552)
* Fix issue 14519: Parsing a GetResult returns NPE if found field is missing.
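The allowlist semantics described in the ingest-common (#14479) and search-pipeline-common (#14562) commits above boil down to one filtering rule. A minimal, self-contained sketch of that rule follows; the class and method names here are illustrative and are not the plugin code from this patch:

import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.function.Function;
import java.util.stream.Collectors;

public class ProcessorAllowlistSketch {

    // Filters the installed processor factories down to the allowlisted ones.
    // An absent allowlist keeps everything, an empty allowlist disables everything,
    // and an unknown name fails fast so the operator notices the misconfiguration at startup.
    static <T> Map<String, T> filter(Optional<Set<String>> allowlist, Map<String, T> installed) {
        if (allowlist.isEmpty()) {
            return installed; // setting not defined: status quo, all processors enabled
        }
        Set<String> unknown = allowlist.get().stream()
            .filter(name -> installed.containsKey(name) == false)
            .collect(Collectors.toSet());
        if (unknown.isEmpty() == false) {
            throw new IllegalStateException("Processors " + unknown + " are defined in the allowlist but are not installed");
        }
        return allowlist.get().stream().collect(Collectors.toMap(Function.identity(), installed::get));
    }

    public static void main(String[] args) {
        Map<String, String> installed = Map.of("set", "setFactory", "rename", "renameFactory");
        System.out.println(filter(Optional.empty(), installed));           // all enabled
        System.out.println(filter(Optional.of(Set.of("set")), installed)); // only "set"
        System.out.println(filter(Optional.of(Set.of()), installed));      // none enabled
    }
}

Failing at startup rather than at pipeline execution is the key design choice: a misspelled allowlist entry surfaces immediately as an IllegalStateException instead of as a broken pipeline later.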
Signed-off-by: Vatsal
Signed-off-by: vatsal
* Fix issue 14519: Parsing a GetResult returns NPE if found field is missing.
Signed-off-by: Vatsal
Signed-off-by: vatsal
* Fix issue 14519: Fix wildcard import.
Signed-off-by: Vatsal
Signed-off-by: vatsal
* Fix issue 14519: Fix wildcard import.
Signed-off-by: Vatsal
Signed-off-by: vatsal
* Fix issue 14519: Fix spotless issues.
Signed-off-by: Vatsal
Signed-off-by: vatsal
* Fix issue 14519: update changelog
Signed-off-by: vatsal
---------
Signed-off-by: vatsal
Signed-off-by: Daniel Widdis
Co-authored-by: Daniel Widdis

* Star tree mapping changes (#14605)
* Star tree mapping changes with feature flag
---------
Signed-off-by: Bharathwaj G

* Bump com.microsoft.azure:msal4j from 1.15.1 to 1.16.0 in /plugins/repository-azure (#14610)
* Bump com.microsoft.azure:msal4j in /plugins/repository-azure
Bumps [com.microsoft.azure:msal4j](https://github.com/AzureAD/microsoft-authentication-library-for-java) from 1.15.1 to 1.16.0.
- [Release notes](https://github.com/AzureAD/microsoft-authentication-library-for-java/releases)
- [Changelog](https://github.com/AzureAD/microsoft-authentication-library-for-java/blob/dev/changelog.txt)
- [Commits](https://github.com/AzureAD/microsoft-authentication-library-for-java/compare/v1.15.1...v1.16.0)
---
updated-dependencies:
- dependency-name: com.microsoft.azure:msal4j
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]
* Updating SHAs
Signed-off-by: dependabot[bot]
* Update changelog
Signed-off-by: dependabot[bot]
---------
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: dependabot[bot]

* [Bugfix] Fix ICacheKeySerializerTests flakiness (#14564)
* Fix testInvalidInput flakiness
Signed-off-by: Peter Alfonsi
* Addressed andrross's comment
Signed-off-by: Peter Alfonsi
* rerun security check
Signed-off-by: Peter Alfonsi
---------
Signed-off-by: Peter Alfonsi
Co-authored-by: Peter Alfonsi

* Correct typo in method name (#14621)
Signed-off-by: vatsal

* Refactoring FilterPath.parse by using an iterative approach instead of recursion. (#14200)
* Refactor FilterPath parse function (#12067)
Signed-off-by: Robin Friedmann
* Implement unit tests for FilterPathTests (#12067)
Signed-off-by: Robin Friedmann
* Write warn log if Filter is empty; Add comments (#12067)
Signed-off-by: Robin Friedmann
* Add changelog
Signed-off-by: Siddhant Deshmukh
* Remove unnecessary log statement
Signed-off-by: Siddhant Deshmukh
* Remove unused logger
Signed-off-by: Siddhant Deshmukh
* Spotless apply
Signed-off-by: Siddhant Deshmukh
* Remove incorrect changelog
Signed-off-by: Siddhant Deshmukh
---------
Signed-off-by: Siddhant Deshmukh
Co-authored-by: Robin Friedmann

* Removing String format in RemoteStoreMigrationAllocationDecider to optimise performance (#14612)
Signed-off-by: RS146BIJAY

* Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata; Correct the check for deciding upload of HashesOfConsistentSettings (#14513)
* Clear templates before Adding; Use NamedWriteableAwareStreamInput for RemoteCustomMetadata
* Correct the check for deciding upload of hashes of consistent settings
Signed-off-by: Sooraj Sinha

* Improve reroute performance by optimising List.removeAll in LocalShardsBalancer to filter remote search shard from relocation decision (#14613)
Signed-off-by: RS146BIJAY

* Fix assertion failure while deleting remote backed index (#14601)
Signed-off-by: Sachin Kale

* OnHeap Star Tree Implementation
Signed-off-by: Sarthak Aggarwal
* addressed nits
Signed-off-by: Sarthak Aggarwal
* addressed major nits
Signed-off-by: Sarthak Aggarwal
* includes Count Aggregator
Signed-off-by: Sarthak Aggarwal
* handling for missing doc values
Signed-off-by: Sarthak Aggarwal
* addressing review comments
Signed-off-by: Sarthak Aggarwal
* rebasing with main
Signed-off-by: Sarthak Aggarwal
* support for empty sequential doc values iterator
Signed-off-by: Sarthak Aggarwal
* nits
Signed-off-by: Sarthak Aggarwal
* min and max star tree aggregators
Signed-off-by: Sarthak Aggarwal
* star tree file formats
* Star tree codec changes
Signed-off-by: Bharathwaj G
* Adding tests
Signed-off-by: Bharathwaj G
* Addressing comments
Signed-off-by: Bharathwaj G
* addressing review comments
Signed-off-by: Bharathwaj G
* Star tree merge changes
Signed-off-by: Bharathwaj G
* fix annotations
* star-tree file formats reader and javadoc fixes
* read for composite index values
Signed-off-by: Sarthak Aggarwal
* doc values file format
Signed-off-by: Sarthak Aggarwal
---------
Signed-off-by: bowenlan-amzn
Signed-off-by: Prudhvi Godithi
Signed-off-by: Sagar Upadhyaya
Signed-off-by: Sagar <99425694+sgup432@users.noreply.github.com>
Signed-off-by: kkewwei kkewwei@163.com
Signed-off-by: kkewwei
Signed-off-by: Peter Alfonsi
Signed-off-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Signed-off-by: Andrew Ross
Signed-off-by: Sagar Upadhyaya
Signed-off-by: panguixin
Signed-off-by: Bukhtawar Khan
Signed-off-by: Andriy Redko
Signed-off-by: dependabot[bot]
Signed-off-by: Shivansh Arora
Signed-off-by: Ashish Singh
Signed-off-by: Liyun Xiu
Signed-off-by: vatsal
Signed-off-by: Daniel Widdis
Signed-off-by: Siddhant Deshmukh
Signed-off-by: RS146BIJAY
Signed-off-by: Sooraj Sinha
Signed-off-by: Sachin Kale
Signed-off-by: Sarthak Aggarwal
Signed-off-by: Bharathwaj G
Co-authored-by: bowenlan-amzn
Co-authored-by: Prudhvi Godithi
Co-authored-by: Sagar <99425694+sgup432@users.noreply.github.com>
Co-authored-by: Sagar Upadhyaya
Co-authored-by: kkewwei
Co-authored-by: Marc Handalian
Co-authored-by: Peter Alfonsi
Co-authored-by: Peter Alfonsi
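The star tree commits above (OnHeap Star Tree Implementation, the count, sum, min, and max aggregators) revolve around one idea: while the tree is built, each configured metric is folded into a single pre-aggregated value per tree node, so aggregations can later be answered from the tree instead of from raw doc values. The following is a conceptual sketch of that fold, with illustrative stat names and class names; it is not the ValueAggregator API added by this patch:

import java.util.Map;
import java.util.function.DoubleBinaryOperator;

public class StarTreeMetricFoldSketch {

    // Identity element plus fold function for each supported metric stat.
    record Aggregator(double identity, DoubleBinaryOperator fold) {}

    static final Map<String, Aggregator> AGGREGATORS = Map.of(
        "count", new Aggregator(0, (acc, v) -> acc + 1),
        "sum", new Aggregator(0, Double::sum),
        "min", new Aggregator(Double.POSITIVE_INFINITY, Math::min),
        "max", new Aggregator(Double.NEGATIVE_INFINITY, Math::max)
    );

    // Folds the doc values of all documents under one star-tree node into a
    // single pre-aggregated value for the given stat.
    static double aggregate(String stat, double[] docValues) {
        Aggregator aggregator = AGGREGATORS.get(stat);
        double acc = aggregator.identity();
        for (double value : docValues) {
            acc = aggregator.fold().applyAsDouble(acc, value);
        }
        return acc;
    }

    public static void main(String[] args) {
        double[] docValues = { 3.0, 1.0, 4.0 };
        System.out.println(aggregate("sum", docValues));   // 8.0
        System.out.println(aggregate("count", docValues)); // 3.0
        System.out.println(aggregate("min", docValues));   // 1.0
    }
}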
Co-authored-by: opensearch-trigger-bot[bot] <98922864+opensearch-trigger-bot[bot]@users.noreply.github.com> Co-authored-by: opensearch-ci-bot <83309141+opensearch-ci-bot@users.noreply.github.com> Co-authored-by: Andrew Ross Co-authored-by: panguixin Co-authored-by: Bukhtawar Khan Co-authored-by: Andriy Redko Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: dependabot[bot] Co-authored-by: Shivansh Arora Co-authored-by: Ashish Singh Co-authored-by: Liyun Xiu Co-authored-by: Vatsal <36672090+imvtsl@users.noreply.github.com> Co-authored-by: Daniel Widdis Co-authored-by: Siddhant Deshmukh Co-authored-by: Robin Friedmann Co-authored-by: rishavz_sagar Co-authored-by: Sooraj Sinha <81695996+soosinha@users.noreply.github.com> Co-authored-by: Sachin Kale Co-authored-by: Sarthak Aggarwal --- .ci/bwcVersions | 1 + .github/CODEOWNERS | 4 +- .github/workflows/gradle-check.yml | 1 + CHANGELOG.md | 10 + MAINTAINERS.md | 1 + TESTING.md | 15 +- buildSrc/version.properties | 2 +- distribution/src/config/opensearch.yml | 4 + .../ApiAnnotationProcessorTests.java | 13 + .../processor/InternalApiAnnotated.java | 4 +- ...licApiConstructorAnnotatedInternalApi.java | 21 + ...cene-core-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...cene-core-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../src/main/java/org/opensearch/Version.java | 3 +- .../core/xcontent/filtering/FilterPath.java | 30 +- .../xcontent/filtering/FilterPathTests.java | 17 + .../common/tier/TieredSpilloverCache.java | 94 +- .../tier/TieredSpilloverCacheTests.java | 343 +++++++- .../common/IngestCommonModulePlugin.java | 39 +- .../common/IngestCommonModulePluginTests.java | 109 +++ ...pressions-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...pressions-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../SearchPipelineCommonModulePlugin.java | 102 ++- ...SearchPipelineCommonModulePluginTests.java | 106 +++ ...lysis-icu-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...lysis-icu-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...-kuromoji-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...-kuromoji-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...ysis-nori-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...ysis-nori-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...-phonetic-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...-phonetic-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...s-smartcn-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...s-smartcn-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...s-stempel-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...s-stempel-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...orfologik-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...orfologik-9.12.0-snapshot-c896995.jar.sha1 | 1 - plugins/repository-azure/build.gradle | 4 +- .../azure-storage-common-12.21.2.jar.sha1 | 1 - .../azure-storage-common-12.25.1.jar.sha1 | 1 + .../licenses/msal4j-1.15.1.jar.sha1 | 1 - .../licenses/msal4j-1.16.0.jar.sha1 | 1 + .../azure/AzureStorageService.java | 3 + .../plugin-metadata/plugin-security.policy | 1 + .../test/search.aggregation/10_histogram.yml | 5 + .../test/search.aggregation/230_composite.yml | 6 + .../330_auto_date_histogram.yml | 5 + .../test/search.aggregation/40_range.yml | 6 + server/build.gradle | 1 + ...is-common-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...is-common-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...rd-codecs-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...rd-codecs-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...cene-core-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...cene-core-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...-grouping-9.12.0-snapshot-847316d.jar.sha1 | 1 + 
...-grouping-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...ghlighter-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...ghlighter-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...cene-join-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...cene-join-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...ne-memory-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...ne-memory-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...cene-misc-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...cene-misc-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...e-queries-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...e-queries-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...eryparser-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...eryparser-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...e-sandbox-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...e-sandbox-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...al-extras-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...al-extras-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...spatial3d-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...spatial3d-9.12.0-snapshot-c896995.jar.sha1 | 1 - ...e-suggest-9.12.0-snapshot-847316d.jar.sha1 | 1 + ...e-suggest-9.12.0-snapshot-c896995.jar.sha1 | 1 - .../cluster/allocation/ClusterRerouteIT.java | 3 +- .../index/mapper/StarTreeMapperIT.java | 440 ++++++++++ .../RemoteMigrationIndexMetadataUpdateIT.java | 2 - .../opensearch/remotestore/RemoteStoreIT.java | 16 +- .../snapshots/SearchableSnapshotIT.java | 25 + .../Composite99DocValuesConsumer.java | 68 ++ .../lucene90/StarTree99DocValuesProducer.java | 158 ++++ .../lucene/index/BaseStarTreeBuilder.java | 818 ++++++++++++++++++ .../opensearch/cluster/metadata/Metadata.java | 1 + .../metadata/MetadataCreateIndexService.java | 5 + .../metadata/MetadataMappingService.java | 12 +- .../cluster/node/DiscoveryNode.java | 4 + .../allocator/LocalShardsBalancer.java | 21 +- ...RemoteStoreMigrationAllocationDecider.java | 34 +- .../common/settings/ClusterSettings.java | 6 +- .../common/settings/FeatureFlagSettings.java | 3 +- .../common/settings/IndexScopedSettings.java | 10 + .../opensearch/common/util/FeatureFlags.java | 10 +- .../remote/RemoteClusterStateService.java | 5 +- .../model/RemoteClusterStateCustoms.java | 2 +- .../remote/model/RemoteCustomMetadata.java | 5 +- .../org/opensearch/index/IndexModule.java | 10 +- .../org/opensearch/index/IndexService.java | 28 +- .../opensearch/index/codec/CodecService.java | 16 +- .../composite/Composite90DocValuesFormat.java | 94 ++ .../composite/Composite90DocValuesReader.java | 287 ++++++ .../composite/Composite90DocValuesWriter.java | 246 ++++++ .../codec/composite/Composite99Codec.java | 56 ++ .../composite/CompositeCodecFactory.java | 42 + .../composite/CompositeIndexFieldInfo.java | 36 + .../codec/composite/CompositeIndexReader.java | 33 + .../codec/composite/CompositeIndexValues.java | 21 + .../datacube/startree/StarTreeValues.java | 63 ++ .../datacube/startree/package-info.java | 12 + .../index/codec/composite/package-info.java | 12 + .../CompositeIndexConstants.java | 26 + .../CompositeIndexMetadata.java | 95 ++ .../CompositeIndexSettings.java | 55 ++ .../CompositeIndexValidator.java | 46 + .../datacube/DateDimension.java | 72 ++ .../compositeindex/datacube/Dimension.java | 22 + .../datacube/DimensionFactory.java | 99 +++ .../datacube/MergeDimension.java | 56 ++ .../index/compositeindex/datacube/Metric.java | 65 ++ .../compositeindex/datacube/MetricStat.java | 44 + .../datacube/NumericDimension.java | 57 ++ .../compositeindex/datacube/package-info.java | 11 + .../datacube/startree/StarTreeDocument.java | 34 + .../datacube/startree/StarTreeField.java | 94 ++ 
.../startree/StarTreeFieldConfiguration.java | 108 +++ .../startree/StarTreeIndexSettings.java | 116 +++ .../datacube/startree/StarTreeValidator.java | 94 ++ .../aggregators/CountValueAggregator.java | 65 ++ .../aggregators/MaxValueAggregator.java | 75 ++ .../aggregators/MetricAggregatorInfo.java | 119 +++ .../startree/aggregators/MetricEntry.java | 35 + .../aggregators/MinValueAggregator.java | 75 ++ .../aggregators/SumValueAggregator.java | 97 +++ .../startree/aggregators/ValueAggregator.java | 64 ++ .../aggregators/ValueAggregatorFactory.java | 64 ++ .../numerictype/StarTreeNumericType.java | 66 ++ .../StarTreeNumericTypeConverters.java | 58 ++ .../aggregators/numerictype/package-info.java | 14 + .../startree/aggregators/package-info.java | 14 + .../builder/OnHeapStarTreeBuilder.java | 310 +++++++ .../startree/builder/StarTreeBuilder.java | 58 ++ .../startree/builder/StarTreesBuilder.java | 148 ++++ .../startree/builder/package-info.java | 14 + .../startree/meta/StarTreeMetadata.java | 228 +++++ .../datacube/startree/meta/TreeMetadata.java | 112 +++ .../datacube/startree/meta/package-info.java | 14 + .../startree/node/OffHeapStarTree.java | 65 ++ .../startree/node/OffHeapStarTreeNode.java | 183 ++++ .../datacube/startree/node/StarTree.java | 23 + .../datacube/startree/node/StarTreeNode.java | 112 +++ .../datacube/startree/node/package-info.java | 14 + .../datacube/startree/package-info.java | 13 + .../utils/SequentialDocValuesIterator.java | 160 ++++ .../startree/utils/StarTreeBuilderUtils.java | 105 +++ .../utils/StarTreeDataSerializer.java | 138 +++ .../startree/utils/StarTreeHelper.java | 38 + .../utils/StarTreeMetaSerializer.java | 223 +++++ .../datacube/startree/utils/package-info.java | 14 + .../index/compositeindex/package-info.java | 14 + .../org/opensearch/index/get/GetResult.java | 10 + .../mapper/CompositeDataCubeFieldType.java | 56 ++ .../mapper/CompositeMappedFieldType.java | 82 ++ .../org/opensearch/index/mapper/Mapper.java | 5 + .../index/mapper/MapperService.java | 26 + .../opensearch/index/mapper/ObjectMapper.java | 106 ++- .../index/mapper/RootObjectMapper.java | 15 +- .../index/mapper/StarTreeMapper.java | 406 +++++++++ .../remote/utils/cache/SegmentedCache.java | 4 +- .../org/opensearch/indices/IndicesModule.java | 2 + .../opensearch/indices/IndicesService.java | 17 +- .../ingest/AbstractBatchingProcessor.java | 136 +++ .../org/opensearch/monitor/fs/FsProbe.java | 13 +- .../main/java/org/opensearch/node/Node.java | 91 +- .../services/org.apache.lucene.codecs.Codec | 1 + .../cluster/metadata/MetadataTests.java | 27 + .../serializer/ICacheKeySerializerTests.java | 7 +- .../stats/DefaultCacheStatsHolderTests.java | 75 +- .../env/NodeRepurposeCommandTests.java | 2 +- ...oteClusterStateAttributesManagerTests.java | 276 +++++- .../RemoteClusterStateServiceTests.java | 125 ++- .../model/RemoteCustomMetadataTests.java | 87 +- .../opensearch/index/codec/CodecTests.java | 47 + .../StarTreeDocValuesFormatTests.java | 207 +++++ .../CountValueAggregatorTests.java | 53 ++ .../aggregators/MaxValueAggregatorTests.java | 58 ++ .../MetricAggregatorInfoTests.java | 113 +++ .../aggregators/MinValueAggregatorTests.java | 58 ++ .../aggregators/SumValueAggregatorTests.java | 72 ++ .../ValueAggregatorFactoryTests.java | 27 + .../builder/BaseStarTreeBuilderTests.java | 254 ++++++ .../builder/OnHeapStarTreeBuilderTests.java | 806 +++++++++++++++++ .../builder/SequentialIteratorTests.java | 133 +++ .../opensearch/index/get/GetResultTests.java | 20 + .../index/mapper/ObjectMapperTests.java | 73 
++ .../index/mapper/StarTreeMapperTests.java | 767 ++++++++++++++++ .../AbstractBatchingProcessorTests.java | 160 ++++ .../java/org/opensearch/node/NodeTests.java | 2 +- .../index/mapper/MapperTestCase.java | 2 +- .../aggregations/AggregatorTestCase.java | 2 + .../opensearch/test/InternalTestCluster.java | 10 +- .../test/OpenSearchIntegTestCase.java | 16 +- 204 files changed, 11598 insertions(+), 281 deletions(-) create mode 100644 libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java create mode 100644 libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java create mode 100644 modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java create mode 100644 plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 create mode 100644 plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 delete mode 100644 plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 create mode 100644 plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 create mode 100644 server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 
server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 delete mode 100644 server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 create mode 100644 server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java create mode 100644 server/src/main/java/org/apache/lucene/codecs/lucene90/Composite99DocValuesConsumer.java create mode 100644 server/src/main/java/org/apache/lucene/codecs/lucene90/StarTree99DocValuesProducer.java create mode 100644 server/src/main/java/org/apache/lucene/index/BaseStarTreeBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesFormat.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesReader.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesWriter.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/codec/composite/package-info.java create mode 100644 
server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexConstants.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexMetadata.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/MergeDimension.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricEntry.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java create mode 
100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/StarTreeMetadata.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/TreeMetadata.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTree.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTreeNode.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTree.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeBuilderUtils.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeDataSerializer.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeHelper.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeMetaSerializer.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/compositeindex/package-info.java create mode 100644 server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java create mode 100644 server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java create mode 100644 server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java create mode 100644 server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java create mode 100644 server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec create mode 100644 server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregatorTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregatorTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java create mode 
100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java create mode 100644 server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/SequentialIteratorTests.java create mode 100644 server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java create mode 100644 server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java diff --git a/.ci/bwcVersions b/.ci/bwcVersions index 1f80ed34d6c10..a738eb54e17f6 100644 --- a/.ci/bwcVersions +++ b/.ci/bwcVersions @@ -34,4 +34,5 @@ BWC_VERSION: - "2.14.0" - "2.14.1" - "2.15.0" + - "2.15.1" - "2.16.0" diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 5a2d08756c49f..8ceecb3abb4a2 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -11,7 +11,7 @@ # 3. Use the command palette to run the CODEOWNERS: Show owners of current file command, which will display all code owners for the current file. # Default ownership for all repo files -* @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +* @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah /modules/transport-netty4/ @peternied @@ -24,4 +24,4 @@ /.github/ @peternied -/MAINTAINERS.md @anasalkouz @andrross @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah +/MAINTAINERS.md @anasalkouz @andrross @ashking94 @Bukhtawar @CEHENKLE @dblock @dbwiddis @gbbafna @jed326 @kotwanikunal @mch2 @msfroh @nknize @owaiskazi19 @peternied @reta @Rishikesh1159 @sachinpkale @saratvemulapalli @shwetathareja @sohami @VachaShah diff --git a/.github/workflows/gradle-check.yml b/.github/workflows/gradle-check.yml index 2909ee95349ce..89d894403ff1a 100644 --- a/.github/workflows/gradle-check.yml +++ b/.github/workflows/gradle-check.yml @@ -113,6 +113,7 @@ jobs: if: success() uses: codecov/codecov-action@v4 with: + token: ${{ secrets.CODECOV_TOKEN }} files: ./codeCoverage.xml - name: Create Comment Success diff --git a/CHANGELOG.md b/CHANGELOG.md index cafe9c20e7ff4..14804bbd5974a 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,7 +9,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - [Remote Store] Rate limiter for remote store low priority uploads ([#14374](https://github.com/opensearch-project/OpenSearch/pull/14374/)) - Apply the date histogram rewrite optimization to range aggregation ([#13865](https://github.com/opensearch-project/OpenSearch/pull/13865)) - [Writable Warm] Add composite directory implementation and integrate it with FileCache ([12782](https://github.com/opensearch-project/OpenSearch/pull/12782)) +- Add batching supported processor base type AbstractBatchingProcessor ([#14554](https://github.com/opensearch-project/OpenSearch/pull/14554)) - Fix race condition while parsing derived fields from search definition 
([14445](https://github.com/opensearch-project/OpenSearch/pull/14445)) +- Add allowlist setting for ingest-common and search-pipeline-common processors ([#14439](https://github.com/opensearch-project/OpenSearch/issues/14439)) ### Dependencies - Bump `org.gradle.test-retry` from 1.5.8 to 1.5.9 ([#13442](https://github.com/opensearch-project/OpenSearch/pull/13442)) @@ -24,11 +26,16 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Bump `com.gradle.develocity` from 3.17.4 to 3.17.5 ([#14397](https://github.com/opensearch-project/OpenSearch/pull/14397)) - Bump `opentelemetry` from 1.36.0 to 1.39.0 ([#14457](https://github.com/opensearch-project/OpenSearch/pull/14457)) - Bump `azure-identity` from 1.11.4 to 1.13.0, Bump `msal4j` from 1.14.3 to 1.15.1, Bump `msal4j-persistence-extension` from 1.2.0 to 1.3.0 ([#14506](https://github.com/opensearch-project/OpenSearch/pull/14506)) +- Bump `com.azure:azure-storage-common` from 12.21.2 to 12.25.1 ([#14517](https://github.com/opensearch-project/OpenSearch/pull/14517)) +- Bump `com.microsoft.azure:msal4j` from 1.15.1 to 1.16.0 ([#14610](https://github.com/opensearch-project/OpenSearch/pull/14610)) ### Changed +- [Tiered Caching] Move query recomputation logic outside write lock ([#14187](https://github.com/opensearch-project/OpenSearch/pull/14187)) - unsignedLongRangeQuery now returns MatchNoDocsQuery if the lower bounds are greater than the upper bounds ([#14416](https://github.com/opensearch-project/OpenSearch/pull/14416)) - Updated the `indices.query.bool.max_clause_count` setting from being static to dynamically updateable ([#13568](https://github.com/opensearch-project/OpenSearch/pull/13568)) - Make the class CommunityIdProcessor final ([#14448](https://github.com/opensearch-project/OpenSearch/pull/14448)) +- Allow @InternalApi annotation on classes not meant to be constructed outside of the OpenSearch core ([#14575](https://github.com/opensearch-project/OpenSearch/pull/14575)) +- Add @InternalApi annotation to japicmp exclusions ([#14597](https://github.com/opensearch-project/OpenSearch/pull/14597)) ### Deprecated @@ -45,6 +52,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), - Fix fs info reporting negative available size ([#11573](https://github.com/opensearch-project/OpenSearch/pull/11573)) - Add ListPitInfo::getKeepAlive() getter ([#14495](https://github.com/opensearch-project/OpenSearch/pull/14495)) - Fix FuzzyQuery in keyword field will use IndexOrDocValuesQuery when both of index and doc_value are true ([#14378](https://github.com/opensearch-project/OpenSearch/pull/14378)) +- Fix file cache initialization ([#14004](https://github.com/opensearch-project/OpenSearch/pull/14004)) +- Handle NPE in GetResult if "found" field is missing ([#14552](https://github.com/opensearch-project/OpenSearch/pull/14552)) +- Refactoring FilterPath.parse by using an iterative approach ([#14200](https://github.com/opensearch-project/OpenSearch/pull/14200)) ### Security diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 91b57a4cbc74e..3298ceb15463c 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -9,6 +9,7 @@ This document contains a list of maintainers in this repo. 
See [opensearch-proje | Anas Alkouz | [anasalkouz](https://github.com/anasalkouz) | Amazon | | Andrew Ross | [andrross](https://github.com/andrross) | Amazon | | Andriy Redko | [reta](https://github.com/reta) | Aiven | +| Ashish Singh | [ashking94](https://github.com/ashking94) | Amazon | | Bukhtawar Khan | [Bukhtawar](https://github.com/Bukhtawar) | Amazon | | Charlotte Henkle | [CEHENKLE](https://github.com/CEHENKLE) | Amazon | | Dan Widdis | [dbwiddis](https://github.com/dbwiddis) | Amazon | diff --git a/TESTING.md b/TESTING.md index e8416f61be7e1..de7ab3eefe2f8 100644 --- a/TESTING.md +++ b/TESTING.md @@ -17,6 +17,8 @@ OpenSearch uses [jUnit](https://junit.org/junit5/) for testing, it also uses ran - [Miscellaneous](#miscellaneous) - [Running verification tasks](#running-verification-tasks) - [Testing the REST layer](#testing-the-rest-layer) + - [Running REST Tests Against An External Cluster](#running-rest-tests-against-an-external-cluster) + - [Debugging REST Tests](#debugging-rest-tests) - [Testing packaging](#testing-packaging) - [Testing packaging on Windows](#testing-packaging-on-windows) - [Testing VMs are disposable](#testing-vms-are-disposable) @@ -272,7 +274,18 @@ yamlRestTest’s and javaRestTest’s are easy to identify, since they are found If in doubt about which command to use, simply run <gradle path>:check -Note that the REST tests, like all the integration tests, can be run against an external cluster by specifying the `tests.cluster` property, which if present needs to contain a comma separated list of nodes to connect to (e.g. localhost:9300). +## Running REST Tests Against An External Cluster + +Note that the REST tests, like all the integration tests, can be run against an external cluster by specifying the following properties `tests.cluster`, `tests.rest.cluster`, `tests.clustername`. Use a comma separated list of node properties for the multi-node cluster. + +For example : + + ./gradlew :rest-api-spec:yamlRestTest \ + -Dtests.cluster=localhost:9200 -Dtests.rest.cluster=localhost:9200 -Dtests.clustername=opensearch + +## Debugging REST Tests + +You can launch a local OpenSearch cluster in debug mode following [Launching and debugging from an IDE](#launching-and-debugging-from-an-ide), and run your REST tests against that following [Running REST Tests Against An External Cluster](#running-rest-tests-against-an-external-cluster). # Testing packaging diff --git a/buildSrc/version.properties b/buildSrc/version.properties index e9aa32ea9a4f5..a99bd4801b7f3 100644 --- a/buildSrc/version.properties +++ b/buildSrc/version.properties @@ -1,5 +1,5 @@ opensearch = 3.0.0 -lucene = 9.12.0-snapshot-c896995 +lucene = 9.12.0-snapshot-847316d bundled_jdk_vendor = adoptium bundled_jdk = 21.0.3+9 diff --git a/distribution/src/config/opensearch.yml b/distribution/src/config/opensearch.yml index 10bab9b3fce92..4115601f62ada 100644 --- a/distribution/src/config/opensearch.yml +++ b/distribution/src/config/opensearch.yml @@ -125,3 +125,7 @@ ${path.logs} # Gates the functionality of enabling Opensearch to use pluggable caches with respective store names via setting. # #opensearch.experimental.feature.pluggable.caching.enabled: false +# +# Gates the functionality of star tree index, which improves the performance of search aggregations. 
+# +#opensearch.experimental.feature.composite_index.star_tree.enabled: true diff --git a/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java b/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java index 8d8a4c7895339..52162e3df0c1c 100644 --- a/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java +++ b/libs/common/src/test/java/org/opensearch/common/annotation/processor/ApiAnnotationProcessorTests.java @@ -473,4 +473,17 @@ public void testPublicApiWithProtectedInterface() { assertThat(failure.diagnotics(), not(hasItem(matching(Diagnostic.Kind.ERROR)))); } + + /** + * The constructor arguments have relaxed semantics at the moment: those could be not annotated or be annotated as {@link InternalApi} + */ + public void testPublicApiConstructorAnnotatedInternalApi() { + final CompilerResult result = compile("PublicApiConstructorAnnotatedInternalApi.java", "NotAnnotated.java"); + assertThat(result, instanceOf(Failure.class)); + + final Failure failure = (Failure) result; + assertThat(failure.diagnotics(), hasSize(2)); + + assertThat(failure.diagnotics(), not(hasItem(matching(Diagnostic.Kind.ERROR)))); + } } diff --git a/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java index 9996ba8b736aa..b0b542e127285 100644 --- a/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java +++ b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/InternalApiAnnotated.java @@ -8,9 +8,9 @@ package org.opensearch.common.annotation.processor; -import org.opensearch.common.annotation.PublicApi; +import org.opensearch.common.annotation.InternalApi; -@PublicApi(since = "1.0.0") +@InternalApi public class InternalApiAnnotated { } diff --git a/libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java new file mode 100644 index 0000000000000..d355a6b770391 --- /dev/null +++ b/libs/common/src/test/resources/org/opensearch/common/annotation/processor/PublicApiConstructorAnnotatedInternalApi.java @@ -0,0 +1,21 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.common.annotation.processor; + +import org.opensearch.common.annotation.InternalApi; +import org.opensearch.common.annotation.PublicApi; + +@PublicApi(since = "1.0.0") +public class PublicApiConstructorAnnotatedInternalApi { + /** + * The constructors have relaxed semantics at the moment: those could be not annotated or be annotated as {@link InternalApi} + */ + @InternalApi + public PublicApiConstructorAnnotatedInternalApi(NotAnnotated arg) {} +} diff --git a/libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 b/libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..e3fd1708ea428 --- /dev/null +++ b/libs/core/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +51ff4940eb1024184bbaa5dae39695d2392c5bab \ No newline at end of file diff --git a/libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 b/libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 299283562fddc..0000000000000 --- a/libs/core/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -826b328c37ea7f27c05d685db03bf8d2b00457ff \ No newline at end of file diff --git a/libs/core/src/main/java/org/opensearch/Version.java b/libs/core/src/main/java/org/opensearch/Version.java index 0cb2d4f867c12..b647a92d6708a 100644 --- a/libs/core/src/main/java/org/opensearch/Version.java +++ b/libs/core/src/main/java/org/opensearch/Version.java @@ -105,7 +105,8 @@ public class Version implements Comparable, ToXContentFragment { public static final Version V_2_14_0 = new Version(2140099, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_14_1 = new Version(2140199, org.apache.lucene.util.Version.LUCENE_9_10_0); public static final Version V_2_15_0 = new Version(2150099, org.apache.lucene.util.Version.LUCENE_9_10_0); - public static final Version V_2_16_0 = new Version(2160099, org.apache.lucene.util.Version.LUCENE_9_11_0); + public static final Version V_2_15_1 = new Version(2150199, org.apache.lucene.util.Version.LUCENE_9_10_0); + public static final Version V_2_16_0 = new Version(2160099, org.apache.lucene.util.Version.LUCENE_9_11_1); public static final Version V_3_0_0 = new Version(3000099, org.apache.lucene.util.Version.LUCENE_9_12_0); public static final Version CURRENT = V_3_0_0; diff --git a/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java b/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java index 5389538a8c7dd..b8da9787165f8 100644 --- a/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java +++ b/libs/core/src/main/java/org/opensearch/core/xcontent/filtering/FilterPath.java @@ -46,7 +46,6 @@ public class FilterPath { static final FilterPath EMPTY = new FilterPath(); - private final String filter; private final String segment; private final FilterPath next; @@ -99,32 +98,29 @@ public static FilterPath[] compile(Set filters) { List paths = new ArrayList<>(); for (String filter : filters) { - if (filter != null) { + if (filter != null && !filter.isEmpty()) { filter = filter.trim(); if (filter.length() > 0) { - paths.add(parse(filter, filter)); + paths.add(parse(filter)); } } } return paths.toArray(new FilterPath[0]); } - private static FilterPath parse(final String filter, final String segment) { - int end = segment.length(); - - for (int i = 0; i < end;) { - char c = segment.charAt(i); + private static FilterPath parse(final String filter) { + // 
Split the filter into segments using a regex + // that avoids splitting escaped dots. String[] segments = filter.split("(?<!\\\\)\\."); + FilterPath next = EMPTY; + for (int i = segments.length - 1; i >= 0; i--) { + // Replace escaped dots with actual dots in the current segment. String segment = segments[i].replaceAll("\\\\.", "."); + next = new FilterPath(filter, segment, next); } - return new FilterPath(filter, segment.replaceAll("\\\\.", "."), EMPTY); + + return next; } @Override diff --git a/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java b/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java index 0c5a17b70a956..d3191609f6119 100644 --- a/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java +++ b/libs/core/src/test/java/org/opensearch/core/xcontent/filtering/FilterPathTests.java @@ -35,6 +35,7 @@ import org.opensearch.common.util.set.Sets; import org.opensearch.test.OpenSearchTestCase; +import java.util.HashSet; import java.util.Set; import static java.util.Collections.singleton; @@ -369,4 +370,20 @@ public void testMultipleFilterPaths() { assertThat(filterPath.getSegment(), is(emptyString())); assertSame(filterPath, FilterPath.EMPTY); } + + public void testCompileWithEmptyString() { + Set filters = new HashSet<>(); + filters.add(""); + FilterPath[] filterPaths = FilterPath.compile(filters); + assertNotNull(filterPaths); + assertEquals(0, filterPaths.length); + } + + public void testCompileWithNull() { + Set filters = new HashSet<>(); + filters.add(null); + FilterPath[] filterPaths = FilterPath.compile(filters); + assertNotNull(filterPaths); + assertEquals(0, filterPaths.length); + } } diff --git a/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java b/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java index 63cdbca101f2a..f69c56808b2a1 100644 --- a/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java +++ b/modules/cache-common/src/main/java/org/opensearch/cache/common/tier/TieredSpilloverCache.java @@ -8,6 +8,8 @@ package org.opensearch.cache.common.tier; +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; import org.opensearch.cache.common.policy.TookTimePolicy; import org.opensearch.common.annotation.ExperimentalApi; import org.opensearch.common.cache.CacheType; @@ -35,9 +37,13 @@ import java.util.Map; import java.util.NoSuchElementException; import java.util.Objects; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.ConcurrentHashMap; +import java.util.concurrent.ExecutionException; import java.util.concurrent.atomic.AtomicBoolean; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; +import java.util.function.BiFunction; import java.util.function.Function; import java.util.function.Predicate; import java.util.function.ToLongBiFunction; @@ -61,6 +67,7 @@ public class TieredSpilloverCache implements ICache { // Used to avoid caching stale entries in lower tiers.
private static final List SPILLOVER_REMOVAL_REASONS = List.of(RemovalReason.EVICTED, RemovalReason.CAPACITY); + private static final Logger logger = LogManager.getLogger(TieredSpilloverCache.class); private final ICache diskCache; private final ICache onHeapCache; @@ -86,6 +93,12 @@ public class TieredSpilloverCache implements ICache { private final Map, TierInfo> caches; private final List> policies; + /** + * This map is used to handle concurrent requests for same key in computeIfAbsent() to ensure we load the value + * only once. + */ + Map, CompletableFuture, V>>> completableFutureMap = new ConcurrentHashMap<>(); + TieredSpilloverCache(Builder builder) { Objects.requireNonNull(builder.onHeapCacheFactory, "onHeap cache builder can't be null"); Objects.requireNonNull(builder.diskCacheFactory, "disk cache builder can't be null"); @@ -182,7 +195,16 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> // and it only has to be loaded one time, we should report one miss and the rest hits. But, if we do stats in // getValueFromTieredCache(), // we will see all misses. Instead, handle stats in computeIfAbsent(). - Tuple cacheValueTuple = getValueFromTieredCache(false).apply(key); + Tuple cacheValueTuple; + CompletableFuture, V>> future = null; + try (ReleasableLock ignore = readLock.acquire()) { + cacheValueTuple = getValueFromTieredCache(false).apply(key); + if (cacheValueTuple == null) { + // Only one of the threads will succeed putting a future into map for the same key. + // Rest will fetch existing future and wait on that to complete. + future = completableFutureMap.putIfAbsent(key, new CompletableFuture<>()); + } + } List heapDimensionValues = statsHolder.getDimensionsWithTierValue(key.dimensions, TIER_DIMENSION_VALUE_ON_HEAP); List diskDimensionValues = statsHolder.getDimensionsWithTierValue(key.dimensions, TIER_DIMENSION_VALUE_DISK); @@ -190,10 +212,7 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> // Add the value to the onHeap cache. We are calling computeIfAbsent which does another get inside. // This is needed as there can be many requests for the same key at the same time and we only want to load // the value once. - V value = null; - try (ReleasableLock ignore = writeLock.acquire()) { - value = onHeapCache.computeIfAbsent(key, loader); - } + V value = compute(key, loader, future); // Handle stats if (loader.isLoaded()) { // The value was just computed and added to the cache by this thread. Register a miss for the heap cache, and the disk cache @@ -222,6 +241,55 @@ public V computeIfAbsent(ICacheKey key, LoadAwareCacheLoader, V> return cacheValueTuple.v1(); } + private V compute(ICacheKey key, LoadAwareCacheLoader, V> loader, CompletableFuture, V>> future) + throws Exception { + // Handler to handle results post processing. Takes a tuple or exception as an input and returns + // the value. Also before returning value, puts the value in cache. + BiFunction, V>, Throwable, Void> handler = (pair, ex) -> { + if (pair != null) { + try (ReleasableLock ignore = writeLock.acquire()) { + onHeapCache.put(pair.v1(), pair.v2()); + } catch (Exception e) { + // TODO: Catch specific exceptions to know whether this resulted from cache or underlying removal + // listeners/stats. Needs better exception handling at underlying layers.For now swallowing + // exception. 
+ logger.warn("Exception occurred while putting item onto heap cache", e); + } + } else { + if (ex != null) { + logger.warn("Exception occurred while trying to compute the value", ex); + } + } + completableFutureMap.remove(key);// Remove key from map as not needed anymore. + return null; + }; + V value = null; + if (future == null) { + future = completableFutureMap.get(key); + future.handle(handler); + try { + value = loader.load(key); + } catch (Exception ex) { + future.completeExceptionally(ex); + throw new ExecutionException(ex); + } + if (value == null) { + NullPointerException npe = new NullPointerException("Loader returned a null value"); + future.completeExceptionally(npe); + throw new ExecutionException(npe); + } else { + future.complete(new Tuple<>(key, value)); + } + } else { + try { + value = future.get().v2(); + } catch (InterruptedException ex) { + throw new IllegalStateException(ex); + } + } + return value; + } + @Override public void invalidate(ICacheKey key) { // We are trying to invalidate the key from all caches though it would be present in only of them. @@ -328,12 +396,22 @@ void handleRemovalFromHeapTier(RemovalNotification, V> notification ICacheKey key = notification.getKey(); boolean wasEvicted = SPILLOVER_REMOVAL_REASONS.contains(notification.getRemovalReason()); boolean countEvictionTowardsTotal = false; // Don't count this eviction towards the cache's total if it ends up in the disk tier - if (caches.get(diskCache).isEnabled() && wasEvicted && evaluatePolicies(notification.getValue())) { + boolean exceptionOccurredOnDiskCachePut = false; + boolean canCacheOnDisk = caches.get(diskCache).isEnabled() && wasEvicted && evaluatePolicies(notification.getValue()); + if (canCacheOnDisk) { try (ReleasableLock ignore = writeLock.acquire()) { diskCache.put(key, notification.getValue()); // spill over to the disk tier and increment its stats + } catch (Exception ex) { + // TODO: Catch specific exceptions. Needs better exception handling. We are just swallowing exception + // in this case as it shouldn't cause upstream request to fail. 
+ logger.warn("Exception occurred while putting item to disk cache", ex); + exceptionOccurredOnDiskCachePut = true; } - updateStatsOnPut(TIER_DIMENSION_VALUE_DISK, key, notification.getValue()); - } else { + if (!exceptionOccurredOnDiskCachePut) { + updateStatsOnPut(TIER_DIMENSION_VALUE_DISK, key, notification.getValue()); + } + } + if (!canCacheOnDisk || exceptionOccurredOnDiskCachePut) { // If the value is not going to the disk cache, send this notification to the TSC's removal listener // as the value is leaving the TSC entirely removalListener.onRemoval(notification); diff --git a/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java b/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java index 54b15f236a418..c6440a1e1797f 100644 --- a/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java +++ b/modules/cache-common/src/test/java/org/opensearch/cache/common/tier/TieredSpilloverCacheTests.java @@ -44,8 +44,12 @@ import java.util.UUID; import java.util.concurrent.CopyOnWriteArrayList; import java.util.concurrent.CountDownLatch; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.Executors; import java.util.concurrent.Phaser; import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicInteger; import java.util.concurrent.atomic.AtomicReference; import java.util.function.Function; import java.util.function.Predicate; @@ -56,6 +60,10 @@ import static org.opensearch.cache.common.tier.TieredSpilloverCacheStatsHolder.TIER_DIMENSION_VALUE_DISK; import static org.opensearch.cache.common.tier.TieredSpilloverCacheStatsHolder.TIER_DIMENSION_VALUE_ON_HEAP; import static org.opensearch.common.cache.store.settings.OpenSearchOnHeapCacheSettings.MAXIMUM_SIZE_IN_BYTES_KEY; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.doThrow; +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; public class TieredSpilloverCacheTests extends OpenSearchTestCase { static final List dimensionNames = List.of("dim1", "dim2", "dim3"); @@ -408,6 +416,7 @@ public void testComputeIfAbsentWithEvictionsFromOnHeapCache() throws Exception { assertEquals(onHeapCacheHit, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); assertEquals(cacheMiss + numOfItems1, getMissesForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_DISK)); assertEquals(diskCacheHit, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_DISK)); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); } public void testComputeIfAbsentWithEvictionsFromTieredCache() throws Exception { @@ -751,7 +760,7 @@ public void testInvalidateAll() throws Exception { } public void testComputeIfAbsentConcurrently() throws Exception { - int onHeapCacheSize = randomIntBetween(100, 300); + int onHeapCacheSize = randomIntBetween(500, 700); int diskCacheSize = randomIntBetween(200, 400); int keyValueSize = 50; @@ -773,7 +782,7 @@ public void testComputeIfAbsentConcurrently() throws Exception { 0 ); - int numberOfSameKeys = randomIntBetween(10, onHeapCacheSize - 1); + int numberOfSameKeys = randomIntBetween(400, onHeapCacheSize - 1); ICacheKey key = getICacheKey(UUID.randomUUID().toString()); String value = UUID.randomUUID().toString(); @@ -802,7 +811,7 @@ public String load(ICacheKey key) { }; loadAwareCacheLoaderList.add(loadAwareCacheLoader); 
phaser.arriveAndAwaitAdvance(); - tieredSpilloverCache.computeIfAbsent(key, loadAwareCacheLoader); + assertEquals(value, tieredSpilloverCache.computeIfAbsent(key, loadAwareCacheLoader)); } catch (Exception e) { throw new RuntimeException(e); } @@ -811,7 +820,7 @@ public String load(ICacheKey key) { threads[i].start(); } phaser.arriveAndAwaitAdvance(); - countDownLatch.await(); // Wait for rest of tasks to be cancelled. + countDownLatch.await(); int numberOfTimesKeyLoaded = 0; assertEquals(numberOfSameKeys, loadAwareCacheLoaderList.size()); for (int i = 0; i < loadAwareCacheLoaderList.size(); i++) { @@ -824,6 +833,215 @@ public String load(ICacheKey key) { // We should see only one heap miss, and the rest hits assertEquals(1, getMissesForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); assertEquals(numberOfSameKeys - 1, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + + public void testComputIfAbsentConcurrentlyWithMultipleKeys() throws Exception { + int onHeapCacheSize = randomIntBetween(300, 500); + int diskCacheSize = randomIntBetween(600, 700); + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + + TieredSpilloverCache tieredSpilloverCache = initializeTieredSpilloverCache( + keyValueSize, + diskCacheSize, + removalListener, + settings, + 0 + ); + + int iterations = 10; + int numberOfKeys = 20; + List> iCacheKeyList = new ArrayList<>(); + for (int i = 0; i < numberOfKeys; i++) { + ICacheKey key = getICacheKey(UUID.randomUUID().toString()); + iCacheKeyList.add(key); + } + ExecutorService executorService = Executors.newFixedThreadPool(8); + CountDownLatch countDownLatch = new CountDownLatch(iterations * numberOfKeys); // To wait for all threads to finish. + + List, String>> loadAwareCacheLoaderList = new CopyOnWriteArrayList<>(); + for (int j = 0; j < numberOfKeys; j++) { + int finalJ = j; + for (int i = 0; i < iterations; i++) { + executorService.submit(() -> { + try { + LoadAwareCacheLoader, String> loadAwareCacheLoader = new LoadAwareCacheLoader<>() { + boolean isLoaded = false; + + @Override + public boolean isLoaded() { + return isLoaded; + } + + @Override + public String load(ICacheKey key) { + isLoaded = true; + return iCacheKeyList.get(finalJ).key; + } + }; + loadAwareCacheLoaderList.add(loadAwareCacheLoader); + tieredSpilloverCache.computeIfAbsent(iCacheKeyList.get(finalJ), loadAwareCacheLoader); + } catch (Exception e) { + throw new RuntimeException(e); + } finally { + countDownLatch.countDown(); + } + }); + } + } + countDownLatch.await(); + int numberOfTimesKeyLoaded = 0; + assertEquals(iterations * numberOfKeys, loadAwareCacheLoaderList.size()); + for (int i = 0; i < loadAwareCacheLoaderList.size(); i++) { + LoadAwareCacheLoader, String> loader = loadAwareCacheLoaderList.get(i); + if (loader.isLoaded()) { + numberOfTimesKeyLoaded++; + } + } + assertEquals(numberOfKeys, numberOfTimesKeyLoaded); // It should be loaded only once. 
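[Editor's sketch] The concurrency tests above and below share one coordination recipe: a Phaser releases all worker threads at the same instant, and a CountDownLatch waits for them all to finish. The bare harness, with a counter standing in for the cache call:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Phaser;
import java.util.concurrent.atomic.AtomicInteger;

public class ConcurrentTestHarnessExample {
    public static void main(String[] args) throws InterruptedException {
        int workers = 8;
        Phaser phaser = new Phaser(workers + 1); // +1 party for the coordinating thread
        CountDownLatch done = new CountDownLatch(workers);
        AtomicInteger counter = new AtomicInteger();

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    phaser.arriveAndAwaitAdvance(); // all workers start together
                    counter.incrementAndGet();      // stand-in for computeIfAbsent(...)
                } finally {
                    done.countDown();
                }
            }).start();
        }
        phaser.arriveAndAwaitAdvance(); // release the workers
        done.await();                   // block until every worker finished
        System.out.println("completed: " + counter.get()); // 8
    }
}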
+ // We should see only one heap miss, and the rest hits + assertEquals(numberOfKeys, getMissesForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); + assertEquals((iterations * numberOfKeys) - numberOfKeys, getHitsForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_ON_HEAP)); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + executorService.shutdownNow(); + } + + public void testComputeIfAbsentConcurrentlyAndThrowsException() throws Exception { + LoadAwareCacheLoader, String> loadAwareCacheLoader = new LoadAwareCacheLoader<>() { + boolean isLoaded = false; + + @Override + public boolean isLoaded() { + return isLoaded; + } + + @Override + public String load(ICacheKey key) { + throw new RuntimeException("Testing"); + } + }; + verifyComputeIfAbsentThrowsException(RuntimeException.class, loadAwareCacheLoader, "Testing"); + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + public void testComputeIfAbsentWithOnHeapCacheThrowingExceptionOnPut() throws Exception { + int onHeapCacheSize = randomIntBetween(100, 300); + int diskCacheSize = randomIntBetween(200, 400); + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + ICache.Factory onHeapCacheFactory = mock(OpenSearchOnHeapCache.OpenSearchOnHeapCacheFactory.class); + ICache mockOnHeapCache = mock(ICache.class); + when(onHeapCacheFactory.create(any(), any(), any())).thenReturn(mockOnHeapCache); + doThrow(new RuntimeException("Testing")).when(mockOnHeapCache).put(any(), any()); + CacheConfig cacheConfig = getCacheConfig(keyValueSize, settings, removalListener); + ICache.Factory mockDiskCacheFactory = new MockDiskCache.MockDiskCacheFactory(0, diskCacheSize, false); + + TieredSpilloverCache tieredSpilloverCache = getTieredSpilloverCache( + onHeapCacheFactory, + mockDiskCacheFactory, + cacheConfig, + null, + removalListener + ); + String value = ""; + value = tieredSpilloverCache.computeIfAbsent(getICacheKey("test"), new LoadAwareCacheLoader<>() { + @Override + public boolean isLoaded() { + return false; + } + + @Override + public String load(ICacheKey key) { + return "test"; + } + }); + assertEquals("test", value); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + + @SuppressWarnings({ "rawtypes", "unchecked" }) + public void testComputeIfAbsentWithDiskCacheThrowingExceptionOnPut() throws Exception { + int onHeapCacheSize = 0; + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + ICache.Factory onHeapCacheFactory = new OpenSearchOnHeapCache.OpenSearchOnHeapCacheFactory(); + CacheConfig cacheConfig = getCacheConfig(keyValueSize, settings, removalListener); + ICache.Factory mockDiskCacheFactory = mock(MockDiskCache.MockDiskCacheFactory.class); + ICache mockDiskCache = mock(ICache.class); + when(mockDiskCacheFactory.create(any(), any(), any())).thenReturn(mockDiskCache); + doThrow(new RuntimeException("Test")).when(mockDiskCache).put(any(), any()); + + TieredSpilloverCache tieredSpilloverCache = 
getTieredSpilloverCache( + onHeapCacheFactory, + mockDiskCacheFactory, + cacheConfig, + null, + removalListener + ); + + String response = ""; + response = tieredSpilloverCache.computeIfAbsent(getICacheKey("test"), new LoadAwareCacheLoader<>() { + @Override + public boolean isLoaded() { + return false; + } + + @Override + public String load(ICacheKey key) { + return "test"; + } + }); + ImmutableCacheStats diskStats = getStatsSnapshotForTier(tieredSpilloverCache, TIER_DIMENSION_VALUE_DISK); + + assertEquals(0, diskStats.getSizeInBytes()); + assertEquals(1, removalListener.evictionsMetric.count()); + assertEquals("test", response); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + + public void testComputeIfAbsentConcurrentlyWithLoaderReturningNull() throws Exception { + LoadAwareCacheLoader, String> loadAwareCacheLoader = new LoadAwareCacheLoader<>() { + boolean isLoaded = false; + + @Override + public boolean isLoaded() { + return isLoaded; + } + + @Override + public String load(ICacheKey key) { + return null; + } + }; + verifyComputeIfAbsentThrowsException(NullPointerException.class, loadAwareCacheLoader, "Loader returned a null value"); } public void testConcurrencyForEvictionFlowFromOnHeapToDiskTier() throws Exception { @@ -1408,6 +1626,26 @@ public boolean isLoaded() { }; } + private TieredSpilloverCache getTieredSpilloverCache( + ICache.Factory onHeapCacheFactory, + ICache.Factory mockDiskCacheFactory, + CacheConfig cacheConfig, + List> policies, + RemovalListener, String> removalListener + ) { + TieredSpilloverCache.Builder builder = new TieredSpilloverCache.Builder().setCacheType( + CacheType.INDICES_REQUEST_CACHE + ) + .setRemovalListener(removalListener) + .setOnHeapCacheFactory(onHeapCacheFactory) + .setDiskCacheFactory(mockDiskCacheFactory) + .setCacheConfig(cacheConfig); + if (policies != null) { + builder.addPolicies(policies); + } + return builder.build(); + } + private TieredSpilloverCache initializeTieredSpilloverCache( int keyValueSize, int diskCacheSize, @@ -1450,17 +1688,34 @@ private TieredSpilloverCache intializeTieredSpilloverCache( .build(); ICache.Factory mockDiskCacheFactory = new MockDiskCache.MockDiskCacheFactory(diskDeliberateDelay, diskCacheSize, false); - TieredSpilloverCache.Builder builder = new TieredSpilloverCache.Builder().setCacheType( - CacheType.INDICES_REQUEST_CACHE - ) + return getTieredSpilloverCache(onHeapCacheFactory, mockDiskCacheFactory, cacheConfig, policies, removalListener); + } + + private CacheConfig getCacheConfig( + int keyValueSize, + Settings settings, + RemovalListener, String> removalListener + ) { + return new CacheConfig.Builder().setKeyType(String.class) + .setKeyType(String.class) + .setWeigher((k, v) -> keyValueSize) + .setSettings(settings) + .setDimensionNames(dimensionNames) .setRemovalListener(removalListener) - .setOnHeapCacheFactory(onHeapCacheFactory) - .setDiskCacheFactory(mockDiskCacheFactory) - .setCacheConfig(cacheConfig); - if (policies != null) { - builder.addPolicies(policies); - } - return builder.build(); + .setKeySerializer(new StringSerializer()) + .setValueSerializer(new StringSerializer()) + .setSettings( + Settings.builder() + .put( + CacheSettings.getConcreteStoreNameSettingForCacheType(CacheType.INDICES_REQUEST_CACHE).getKey(), + TieredSpilloverCache.TieredSpilloverCacheFactory.TIERED_SPILLOVER_CACHE_NAME + ) + .put(FeatureFlags.PLUGGABLE_CACHE, "true") + .put(settings) + .build() + ) + .setClusterSettings(clusterSettings) + .build(); } // Helper functions for extracting tier 
aggregated stats. @@ -1501,6 +1756,66 @@ private ImmutableCacheStats getStatsSnapshotForTier(TieredSpilloverCache t return snapshot; } + private void verifyComputeIfAbsentThrowsException( + Class expectedException, + LoadAwareCacheLoader, String> loader, + String expectedExceptionMessage + ) throws InterruptedException { + int onHeapCacheSize = randomIntBetween(100, 300); + int diskCacheSize = randomIntBetween(200, 400); + int keyValueSize = 50; + + MockCacheRemovalListener removalListener = new MockCacheRemovalListener<>(); + Settings settings = Settings.builder() + .put( + OpenSearchOnHeapCacheSettings.getSettingListForCacheType(CacheType.INDICES_REQUEST_CACHE) + .get(MAXIMUM_SIZE_IN_BYTES_KEY) + .getKey(), + onHeapCacheSize * keyValueSize + "b" + ) + .build(); + + TieredSpilloverCache tieredSpilloverCache = initializeTieredSpilloverCache( + keyValueSize, + diskCacheSize, + removalListener, + settings, + 0 + ); + + int numberOfSameKeys = randomIntBetween(10, onHeapCacheSize - 1); + ICacheKey key = getICacheKey(UUID.randomUUID().toString()); + String value = UUID.randomUUID().toString(); + AtomicInteger exceptionCount = new AtomicInteger(); + + Thread[] threads = new Thread[numberOfSameKeys]; + Phaser phaser = new Phaser(numberOfSameKeys + 1); + CountDownLatch countDownLatch = new CountDownLatch(numberOfSameKeys); // To wait for all threads to finish. + + for (int i = 0; i < numberOfSameKeys; i++) { + threads[i] = new Thread(() -> { + try { + phaser.arriveAndAwaitAdvance(); + tieredSpilloverCache.computeIfAbsent(key, loader); + } catch (Exception e) { + exceptionCount.incrementAndGet(); + assertEquals(ExecutionException.class, e.getClass()); + assertEquals(expectedException, e.getCause().getClass()); + assertEquals(expectedExceptionMessage, e.getCause().getMessage()); + } finally { + countDownLatch.countDown(); + } + }); + threads[i].start(); + } + phaser.arriveAndAwaitAdvance(); + countDownLatch.await(); // Wait for rest of tasks to be cancelled. 
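[Editor's sketch] The verifyComputeIfAbsentThrowsException helper above (its final assertions continue below) checks that every concurrent caller sees the loader failure as an ExecutionException wrapping the original cause. The three checks in isolation (plain JDK, AssertionError in place of JUnit assertions):

import java.util.concurrent.ExecutionException;

public class WrappedCauseCheckExample {
    public static void main(String[] args) {
        try {
            // Stand-in for computeIfAbsent() rethrowing a loader failure.
            throw new ExecutionException(new RuntimeException("Testing"));
        } catch (Exception e) {
            // The same three checks the helper performs for every thread.
            if (!(e instanceof ExecutionException)) throw new AssertionError("wrong wrapper type");
            if (!(e.getCause() instanceof RuntimeException)) throw new AssertionError("wrong cause type");
            if (!"Testing".equals(e.getCause().getMessage())) throw new AssertionError("wrong message");
            System.out.println("loader failure surfaced as ExecutionException(RuntimeException)");
        }
    }
}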
+ + // Verify exception count was equal to number of requests + assertEquals(numberOfSameKeys, exceptionCount.get()); + assertEquals(0, tieredSpilloverCache.completableFutureMap.size()); + } + private ImmutableCacheStats getTotalStatsSnapshot(TieredSpilloverCache tsc) throws IOException { ImmutableCacheStatsHolder cacheStats = tsc.stats(new String[0]); return cacheStats.getStatsForDimensionValues(List.of()); diff --git a/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java b/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java index 162934efa6778..5b2db9ff940e7 100644 --- a/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java +++ b/modules/ingest-common/src/main/java/org/opensearch/ingest/common/IngestCommonModulePlugin.java @@ -58,10 +58,20 @@ import java.util.HashMap; import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.function.Function; import java.util.function.Supplier; +import java.util.stream.Collectors; public class IngestCommonModulePlugin extends Plugin implements ActionPlugin, IngestPlugin { + static final Setting> PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "ingest.common.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + static final Setting WATCHDOG_INTERVAL = Setting.timeSetting( "ingest.grok.watchdog.interval", TimeValue.timeValueSeconds(1), @@ -77,7 +87,7 @@ public IngestCommonModulePlugin() {} @Override public Map getProcessors(Processor.Parameters parameters) { - Map processors = new HashMap<>(); + final Map processors = new HashMap<>(); processors.put(DateProcessor.TYPE, new DateProcessor.Factory(parameters.scriptService)); processors.put(SetProcessor.TYPE, new SetProcessor.Factory(parameters.scriptService)); processors.put(AppendProcessor.TYPE, new AppendProcessor.Factory(parameters.scriptService)); @@ -110,7 +120,7 @@ public Map getProcessors(Processor.Parameters paramet processors.put(RemoveByPatternProcessor.TYPE, new RemoveByPatternProcessor.Factory()); processors.put(CommunityIdProcessor.TYPE, new CommunityIdProcessor.Factory()); processors.put(FingerprintProcessor.TYPE, new FingerprintProcessor.Factory()); - return Collections.unmodifiableMap(processors); + return filterForAllowlistSetting(parameters.env.settings(), processors); } @Override @@ -133,7 +143,7 @@ public List getRestHandlers( @Override public List> getSettings() { - return Arrays.asList(WATCHDOG_INTERVAL, WATCHDOG_MAX_EXECUTION_TIME); + return Arrays.asList(WATCHDOG_INTERVAL, WATCHDOG_MAX_EXECUTION_TIME, PROCESSORS_ALLOWLIST_SETTING); } private static MatcherWatchdog createGrokThreadWatchdog(Processor.Parameters parameters) { @@ -147,4 +157,27 @@ private static MatcherWatchdog createGrokThreadWatchdog(Processor.Parameters par ); } + private Map filterForAllowlistSetting(Settings settings, Map map) { + if (PROCESSORS_ALLOWLIST_SETTING.exists(settings) == false) { + return Map.copyOf(map); + } + final Set allowlist = Set.copyOf(PROCESSORS_ALLOWLIST_SETTING.get(settings)); + // Assert that no unknown processors are defined in the allowlist + final Set unknownAllowlistProcessors = allowlist.stream() + .filter(p -> map.containsKey(p) == false) + .collect(Collectors.toUnmodifiableSet()); + if (unknownAllowlistProcessors.isEmpty() == false) { + throw new IllegalArgumentException( + "Processor(s) " + + unknownAllowlistProcessors + + " were defined in [" + + PROCESSORS_ALLOWLIST_SETTING.getKey() + + 
"] but do not exist" + ); + } + return map.entrySet() + .stream() + .filter(e -> allowlist.contains(e.getKey())) + .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, Map.Entry::getValue)); + } } diff --git a/modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java b/modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java new file mode 100644 index 0000000000000..b0c1e0fdbaa63 --- /dev/null +++ b/modules/ingest-common/src/test/java/org/opensearch/ingest/common/IngestCommonModulePluginTests.java @@ -0,0 +1,109 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.ingest.common; + +import org.opensearch.common.settings.Settings; +import org.opensearch.env.TestEnvironment; +import org.opensearch.ingest.Processor; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; +import java.util.List; +import java.util.Set; + +public class IngestCommonModulePluginTests extends OpenSearchTestCase { + + public void testAllowlist() throws IOException { + runAllowlistTest(List.of()); + runAllowlistTest(List.of("date")); + runAllowlistTest(List.of("set")); + runAllowlistTest(List.of("copy", "date")); + runAllowlistTest(List.of("date", "set", "copy")); + } + + private void runAllowlistTest(List allowlist) throws IOException { + final Settings settings = Settings.builder() + .putList(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey(), allowlist) + .build(); + try (IngestCommonModulePlugin plugin = new IngestCommonModulePlugin()) { + assertEquals(Set.copyOf(allowlist), plugin.getProcessors(createParameters(settings)).keySet()); + } + } + + public void testAllowlistNotSpecified() throws IOException { + final Settings.Builder builder = Settings.builder(); + builder.remove(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey()); + final Settings settings = builder.build(); + try (IngestCommonModulePlugin plugin = new IngestCommonModulePlugin()) { + final Set expected = Set.of( + "append", + "urldecode", + "sort", + "fail", + "trim", + "set", + "fingerprint", + "pipeline", + "json", + "join", + "kv", + "bytes", + "date", + "drop", + "community_id", + "lowercase", + "convert", + "copy", + "gsub", + "dot_expander", + "rename", + "remove_by_pattern", + "html_strip", + "remove", + "csv", + "grok", + "date_index_name", + "foreach", + "script", + "dissect", + "uppercase", + "split" + ); + assertEquals(expected, plugin.getProcessors(createParameters(settings)).keySet()); + } + } + + public void testAllowlistHasNonexistentProcessors() throws IOException { + final Settings settings = Settings.builder() + .putList(IngestCommonModulePlugin.PROCESSORS_ALLOWLIST_SETTING.getKey(), List.of("threeve")) + .build(); + try (IngestCommonModulePlugin plugin = new IngestCommonModulePlugin()) { + IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> plugin.getProcessors(createParameters(settings)) + ); + assertTrue(e.getMessage(), e.getMessage().contains("threeve")); + } + } + + private static Processor.Parameters createParameters(Settings settings) { + return new Processor.Parameters( + TestEnvironment.newEnvironment(Settings.builder().put(settings).put("path.home", "").build()), + null, + null, + null, + () -> 0L, + (a, b) -> null, + null, + null, + $ -> {}, + null + ); + } +} diff --git 
a/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 b/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..83dd8e657bdd5 --- /dev/null +++ b/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +b866103bbaca4141c152deca9252bd137026dafc \ No newline at end of file diff --git a/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 b/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 6d8d3be59f945..0000000000000 --- a/modules/lang-expression/licenses/lucene-expressions-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -9f0321cf2d34fca3f1f9334fdfee2b79d9d27444 \ No newline at end of file diff --git a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java index 5378a6721efb2..1574621a8200e 100644 --- a/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java +++ b/modules/search-pipeline-common/src/main/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePlugin.java @@ -8,24 +8,61 @@ package org.opensearch.search.pipeline.common; +import org.opensearch.common.settings.Setting; +import org.opensearch.common.settings.Settings; import org.opensearch.plugins.Plugin; import org.opensearch.plugins.SearchPipelinePlugin; import org.opensearch.search.pipeline.Processor; +import org.opensearch.search.pipeline.SearchPhaseResultsProcessor; import org.opensearch.search.pipeline.SearchRequestProcessor; import org.opensearch.search.pipeline.SearchResponseProcessor; +import java.util.List; import java.util.Map; +import java.util.Set; +import java.util.function.Function; +import java.util.stream.Collectors; /** * Plugin providing common search request/response processors for use in search pipelines. */ public class SearchPipelineCommonModulePlugin extends Plugin implements SearchPipelinePlugin { + static final Setting> REQUEST_PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "search.pipeline.common.request.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + + static final Setting> RESPONSE_PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "search.pipeline.common.response.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + + static final Setting> SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING = Setting.listSetting( + "search.pipeline.common.search.phase.results.processors.allowed", + List.of(), + Function.identity(), + Setting.Property.NodeScope + ); + /** * No constructor needed, but build complains if we don't have a constructor with JavaDoc. */ public SearchPipelineCommonModulePlugin() {} + @Override + public List> getSettings() { + return List.of( + REQUEST_PROCESSORS_ALLOWLIST_SETTING, + RESPONSE_PROCESSORS_ALLOWLIST_SETTING, + SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING + ); + } + /** * Returns a map of processor factories. 
* @@ -34,25 +71,62 @@ public SearchPipelineCommonModulePlugin() {} */ @Override public Map> getRequestProcessors(Parameters parameters) { - return Map.of( - FilterQueryRequestProcessor.TYPE, - new FilterQueryRequestProcessor.Factory(parameters.namedXContentRegistry), - ScriptRequestProcessor.TYPE, - new ScriptRequestProcessor.Factory(parameters.scriptService), - OversampleRequestProcessor.TYPE, - new OversampleRequestProcessor.Factory() + return filterForAllowlistSetting( + REQUEST_PROCESSORS_ALLOWLIST_SETTING, + parameters.env.settings(), + Map.of( + FilterQueryRequestProcessor.TYPE, + new FilterQueryRequestProcessor.Factory(parameters.namedXContentRegistry), + ScriptRequestProcessor.TYPE, + new ScriptRequestProcessor.Factory(parameters.scriptService), + OversampleRequestProcessor.TYPE, + new OversampleRequestProcessor.Factory() + ) ); } @Override public Map> getResponseProcessors(Parameters parameters) { - return Map.of( - RenameFieldResponseProcessor.TYPE, - new RenameFieldResponseProcessor.Factory(), - TruncateHitsResponseProcessor.TYPE, - new TruncateHitsResponseProcessor.Factory(), - CollapseResponseProcessor.TYPE, - new CollapseResponseProcessor.Factory() + return filterForAllowlistSetting( + RESPONSE_PROCESSORS_ALLOWLIST_SETTING, + parameters.env.settings(), + Map.of( + RenameFieldResponseProcessor.TYPE, + new RenameFieldResponseProcessor.Factory(), + TruncateHitsResponseProcessor.TYPE, + new TruncateHitsResponseProcessor.Factory(), + CollapseResponseProcessor.TYPE, + new CollapseResponseProcessor.Factory() + ) ); } + + @Override + public Map> getSearchPhaseResultsProcessors(Parameters parameters) { + return filterForAllowlistSetting(SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING, parameters.env.settings(), Map.of()); + } + + private Map> filterForAllowlistSetting( + Setting> allowlistSetting, + Settings settings, + Map> map + ) { + if (allowlistSetting.exists(settings) == false) { + return Map.copyOf(map); + } + final Set allowlist = Set.copyOf(allowlistSetting.get(settings)); + // Assert that no unknown processors are defined in the allowlist + final Set unknownAllowlistProcessors = allowlist.stream() + .filter(p -> map.containsKey(p) == false) + .collect(Collectors.toUnmodifiableSet()); + if (unknownAllowlistProcessors.isEmpty() == false) { + throw new IllegalArgumentException( + "Processor(s) " + unknownAllowlistProcessors + " were defined in [" + allowlistSetting.getKey() + "] but do not exist" + ); + } + return map.entrySet() + .stream() + .filter(e -> allowlist.contains(e.getKey())) + .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, Map.Entry::getValue)); + } } diff --git a/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java new file mode 100644 index 0000000000000..519468ebe17ff --- /dev/null +++ b/modules/search-pipeline-common/src/test/java/org/opensearch/search/pipeline/common/SearchPipelineCommonModulePluginTests.java @@ -0,0 +1,106 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
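[Editor's sketch] The search-pipeline variant applies the same filter once per processor kind, keyed by three distinct settings. The not-defined / empty / populated semantics described for these allowlists, condensed into a runnable sketch (Optional stands in for Setting.exists(); this is not the OpenSearch Settings API):

import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;

public class AllowlistSemanticsExample {
    // Optional.empty()       -> setting not defined: all processors enabled.
    // Optional.of(List.of()) -> defined but empty: all processors disabled.
    // Optional.of(names)     -> only the named processors enabled.
    static Map<String, String> apply(Optional<List<String>> allowlist, Map<String, String> registered) {
        if (allowlist.isEmpty()) {
            return Map.copyOf(registered);
        }
        Set<String> allowed = Set.copyOf(allowlist.get());
        return registered.entrySet()
            .stream()
            .filter(e -> allowed.contains(e.getKey()))
            .collect(Collectors.toUnmodifiableMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, String> request = Map.of("filter_query", "f", "script", "s", "oversample", "o");
        System.out.println(apply(Optional.empty(), request).size());        // 3
        System.out.println(apply(Optional.of(List.of()), request).size());  // 0
        System.out.println(apply(Optional.of(List.of("script")), request)); // {script=s}
    }
}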
+ */ + +package org.opensearch.search.pipeline.common; + +import org.opensearch.common.settings.Settings; +import org.opensearch.env.TestEnvironment; +import org.opensearch.plugins.SearchPipelinePlugin; +import org.opensearch.test.OpenSearchTestCase; + +import java.io.IOException; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.function.BiFunction; + +public class SearchPipelineCommonModulePluginTests extends OpenSearchTestCase { + + public void testRequestProcessorAllowlist() throws IOException { + final String key = SearchPipelineCommonModulePlugin.REQUEST_PROCESSORS_ALLOWLIST_SETTING.getKey(); + runAllowlistTest(key, List.of(), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("filter_query"), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("script"), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("oversample", "script"), SearchPipelineCommonModulePlugin::getRequestProcessors); + runAllowlistTest(key, List.of("filter_query", "script", "oversample"), SearchPipelineCommonModulePlugin::getRequestProcessors); + + final IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> runAllowlistTest(key, List.of("foo"), SearchPipelineCommonModulePlugin::getRequestProcessors) + ); + assertTrue(e.getMessage(), e.getMessage().contains("foo")); + } + + public void testResponseProcessorAllowlist() throws IOException { + final String key = SearchPipelineCommonModulePlugin.RESPONSE_PROCESSORS_ALLOWLIST_SETTING.getKey(); + runAllowlistTest(key, List.of(), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest(key, List.of("rename_field"), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest(key, List.of("truncate_hits"), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest(key, List.of("collapse", "truncate_hits"), SearchPipelineCommonModulePlugin::getResponseProcessors); + runAllowlistTest( + key, + List.of("rename_field", "truncate_hits", "collapse"), + SearchPipelineCommonModulePlugin::getResponseProcessors + ); + + final IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> runAllowlistTest(key, List.of("foo"), SearchPipelineCommonModulePlugin::getResponseProcessors) + ); + assertTrue(e.getMessage(), e.getMessage().contains("foo")); + } + + public void testSearchPhaseResultsProcessorAllowlist() throws IOException { + final String key = SearchPipelineCommonModulePlugin.SEARCH_PHASE_RESULTS_PROCESSORS_ALLOWLIST_SETTING.getKey(); + runAllowlistTest(key, List.of(), SearchPipelineCommonModulePlugin::getSearchPhaseResultsProcessors); + + final IllegalArgumentException e = expectThrows( + IllegalArgumentException.class, + () -> runAllowlistTest(key, List.of("foo"), SearchPipelineCommonModulePlugin::getSearchPhaseResultsProcessors) + ); + assertTrue(e.getMessage(), e.getMessage().contains("foo")); + } + + private void runAllowlistTest( + String settingKey, + List allowlist, + BiFunction> function + ) throws IOException { + final Settings settings = Settings.builder().putList(settingKey, allowlist).build(); + try (SearchPipelineCommonModulePlugin plugin = new SearchPipelineCommonModulePlugin()) { + assertEquals(Set.copyOf(allowlist), function.apply(plugin, createParameters(settings)).keySet()); + } + } + + public void testAllowlistNotSpecified() throws IOException { + final Settings settings = Settings.EMPTY; + try 
(SearchPipelineCommonModulePlugin plugin = new SearchPipelineCommonModulePlugin()) { + assertEquals(Set.of("oversample", "filter_query", "script"), plugin.getRequestProcessors(createParameters(settings)).keySet()); + assertEquals( + Set.of("rename_field", "truncate_hits", "collapse"), + plugin.getResponseProcessors(createParameters(settings)).keySet() + ); + assertEquals(Set.of(), plugin.getSearchPhaseResultsProcessors(createParameters(settings)).keySet()); + } + } + + private static SearchPipelinePlugin.Parameters createParameters(Settings settings) { + return new SearchPipelinePlugin.Parameters( + TestEnvironment.newEnvironment(Settings.builder().put(settings).put("path.home", "").build()), + null, + null, + null, + () -> 0L, + (a, b) -> null, + null, + null, + $ -> {}, + null + ); + } +} diff --git a/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..80e254ed3d098 --- /dev/null +++ b/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +04436942995a4952ce5654126dfb767d6335674e \ No newline at end of file diff --git a/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 696803bf63b46..0000000000000 --- a/plugins/analysis-icu/licenses/lucene-analysis-icu-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -e6314f36fb29e208d58c0470f14269c9c36996ba \ No newline at end of file diff --git a/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..3baed2a6e660b --- /dev/null +++ b/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +85918e24fc3bf63fcd953807ab2eb3fa55c987c2 \ No newline at end of file diff --git a/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 7a12077d7fc62..0000000000000 --- a/plugins/analysis-kuromoji/licenses/lucene-analysis-kuromoji-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -77fbf1e37af79715f28f66d8cc5b50af2982fc54 \ No newline at end of file diff --git a/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..4e9327112d412 --- /dev/null +++ b/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +15e425e9cc0ab9d65fac3c919199a24dfa3631eb \ No newline at end of file diff --git a/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index efed62c7e5e5b..0000000000000 --- a/plugins/analysis-nori/licenses/lucene-analysis-nori-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a7a4e9c6004c72782e1002e1dcfaf4fbab7887d8 \ No newline at end of file diff --git a/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 
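[Editor's sketch] The test class above avoids three near-identical helpers by passing the processor getter as a BiFunction and invoking it with a method reference. The trick in miniature (types reduced to plain maps; nothing here is the OpenSearch test API):

import java.util.Map;
import java.util.function.BiFunction;

public class MethodReferenceHelperExample {
    static class Plugin {
        Map<String, String> getRequestProcessors(String settings) { return Map.of("script", "s"); }
        Map<String, String> getResponseProcessors(String settings) { return Map.of("collapse", "c"); }
    }

    // One helper, parameterized over which getter it exercises.
    static void check(BiFunction<Plugin, String, Map<String, String>> getter, String expectedKey) {
        Map<String, String> result = getter.apply(new Plugin(), "settings");
        if (result.containsKey(expectedKey) == false) {
            throw new AssertionError("missing processor: " + expectedKey);
        }
    }

    public static void main(String[] args) {
        check(Plugin::getRequestProcessors, "script");
        check(Plugin::getResponseProcessors, "collapse");
        System.out.println("both getters verified through method references");
    }
}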
b/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..7e7e9fe5b22b4 --- /dev/null +++ b/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +3d16c18348e7d4a00cb83100c43f3e21239d224e \ No newline at end of file diff --git a/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index f2020abcb8ef7..0000000000000 --- a/plugins/analysis-phonetic/licenses/lucene-analysis-phonetic-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -42ac148a3769d6eb880d7f184d1917bad48ca303 \ No newline at end of file diff --git a/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..98e0ecc9cbb89 --- /dev/null +++ b/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +2ef6d9dffc6816d3cd04a54fe1ee43e13f850a37 \ No newline at end of file diff --git a/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index b64e4061311e5..0000000000000 --- a/plugins/analysis-smartcn/licenses/lucene-analysis-smartcn-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -adf2a25339ac8722647f8196288c1f5056bbf0de \ No newline at end of file diff --git a/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..ef675f2b9702e --- /dev/null +++ b/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +e72b2262f5393d9ff255fb901297d4e6790e9102 \ No newline at end of file diff --git a/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index f56e7fc5df766..0000000000000 --- a/plugins/analysis-stempel/licenses/lucene-analysis-stempel-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a689e3af2015b21b7b4f41a1206b50c44519b6f7 \ No newline at end of file diff --git a/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 b/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..d8bbac27fd360 --- /dev/null +++ b/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +416ac44b2e76592c9e85338798cae93c3cf5475e \ No newline at end of file diff --git a/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 b/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 30732e3c4a688..0000000000000 --- a/plugins/analysis-ukrainian/licenses/lucene-analysis-morfologik-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -c875f7706ee81b1fb0b3443767a8c9c52f30abc5 \ No newline at end of file diff --git 
a/plugins/repository-azure/build.gradle b/plugins/repository-azure/build.gradle index f3aa64316b667..13b711019ff2a 100644 --- a/plugins/repository-azure/build.gradle +++ b/plugins/repository-azure/build.gradle @@ -47,7 +47,7 @@ dependencies { api 'com.azure:azure-core:1.49.1' api 'com.azure:azure-json:1.1.0' api 'com.azure:azure-xml:1.0.0' - api 'com.azure:azure-storage-common:12.21.2' + api 'com.azure:azure-storage-common:12.25.1' api 'com.azure:azure-core-http-netty:1.15.1' api "io.netty:netty-codec-dns:${versions.netty}" api "io.netty:netty-codec-socks:${versions.netty}" @@ -61,7 +61,7 @@ dependencies { // Start of transitive dependencies for azure-identity api 'com.microsoft.azure:msal4j-persistence-extension:1.3.0' api "net.java.dev.jna:jna-platform:${versions.jna}" - api 'com.microsoft.azure:msal4j:1.15.1' + api 'com.microsoft.azure:msal4j:1.16.0' api 'com.nimbusds:oauth2-oidc-sdk:11.9.1' api 'com.nimbusds:nimbus-jose-jwt:9.40' api 'com.nimbusds:content-type:2.3' diff --git a/plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 b/plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 deleted file mode 100644 index b3c73774764df..0000000000000 --- a/plugins/repository-azure/licenses/azure-storage-common-12.21.2.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -d2676d4fc40a501bd5d0437b8d2bfb9926022bea \ No newline at end of file diff --git a/plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 b/plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 new file mode 100644 index 0000000000000..822a60d81ca27 --- /dev/null +++ b/plugins/repository-azure/licenses/azure-storage-common-12.25.1.jar.sha1 @@ -0,0 +1 @@ +96e2df76ce9a8fa084ae289bb59295d565f2b8d5 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 deleted file mode 100644 index 797f21d0d4995..0000000000000 --- a/plugins/repository-azure/licenses/msal4j-1.15.1.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -cd1daa94b81bd97153536b661c31295f99cbb8e7 \ No newline at end of file diff --git a/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 b/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 new file mode 100644 index 0000000000000..29fe5022a1570 --- /dev/null +++ b/plugins/repository-azure/licenses/msal4j-1.16.0.jar.sha1 @@ -0,0 +1 @@ +708a0a986ed091054f1c08866712e5b41aec6700 \ No newline at end of file diff --git a/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java b/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java index f39ed185d8b35..4f30247f0af08 100644 --- a/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java +++ b/plugins/repository-azure/src/main/java/org/opensearch/repositories/azure/AzureStorageService.java @@ -141,6 +141,9 @@ public Void run() { // - https://github.com/Azure/azure-sdk-for-java/pull/25004 // - https://github.com/Azure/azure-sdk-for-java/pull/24374 Configuration.getGlobalConfiguration().put("AZURE_JACKSON_ADAPTER_USE_ACCESS_HELPER", "true"); + // See please: + // - https://github.com/Azure/azure-sdk-for-java/issues/37464 + Configuration.getGlobalConfiguration().put("AZURE_ENABLE_SHUTDOWN_HOOK_WITH_PRIVILEGE", "true"); } public AzureStorageService(Settings settings) { diff --git a/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy 
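[Editor's sketch] The AzureStorageService change above opts the Azure SDK into registering its shutdown hook inside a privileged block, and the policy file touched below grants the matching permission: under Java's SecurityManager, which OpenSearch uses to sandbox plugins, Runtime.addShutdownHook checks RuntimePermission("shutdownHooks"). A minimal JDK-only illustration of the call that needs the grant:

public class ShutdownHookPermissionExample {
    public static void main(String[] args) {
        // With a SecurityManager installed, this call checks
        // RuntimePermission("shutdownHooks") and throws SecurityException
        // unless the policy grants it, as the patch below does for the plugin.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> System.out.println("cleaning up")));
        System.out.println("shutdown hook registered");
    }
}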
b/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy index e8fbe35ebab1d..eedcfd98da150 100644 --- a/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy +++ b/plugins/repository-azure/src/main/plugin-metadata/plugin-security.policy @@ -38,6 +38,7 @@ grant { permission java.lang.RuntimePermission "accessDeclaredMembers"; permission java.lang.reflect.ReflectPermission "suppressAccessChecks"; permission java.lang.RuntimePermission "setContextClassLoader"; + permission java.lang.RuntimePermission "shutdownHooks"; // azure client set Authenticator for proxy username/password permission java.net.NetPermission "setDefaultAuthenticator"; diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml index 996c2aae8cfe4..a75b1d0eac793 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/10_histogram.yml @@ -678,6 +678,11 @@ setup: - '{"index": {}}' - '{"date": "2016-03-01"}' + - do: + indices.forcemerge: + index: test_2 + max_num_segments: 1 + - do: search: index: test_2 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml index 78e2e6858c6ff..ade9eb3eee0dc 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/230_composite.yml @@ -1101,6 +1101,12 @@ setup: - '{"date": "2016-02-01"}' - '{"index": {}}' - '{"date": "2016-03-01"}' + + - do: + indices.forcemerge: + index: test_2 + max_num_segments: 1 + - do: search: index: test_2 diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml index fc82517788c91..0897e0bdd894b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/330_auto_date_histogram.yml @@ -133,6 +133,11 @@ setup: - '{"index": {}}' - '{"date": "2020-03-09", "v": 4}' + - do: + indices.forcemerge: + index: test_profile + max_num_segments: 1 + - do: search: index: test_profile diff --git a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml index 2fd926276d0b4..80aad96ce1f6b 100644 --- a/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml +++ b/rest-api-spec/src/main/resources/rest-api-spec/test/search.aggregation/40_range.yml @@ -544,6 +544,7 @@ setup: body: settings: number_of_replicas: 0 + number_of_shards: 1 refresh_interval: -1 mappings: properties: @@ -567,6 +568,11 @@ setup: - '{"index": {}}' - '{"double" : 50}' + - do: + indices.forcemerge: + index: test_profile + max_num_segments: 1 + - do: search: index: test_profile diff --git a/server/build.gradle b/server/build.gradle index b8a99facbf964..429af5d0ac258 100644 --- a/server/build.gradle +++ b/server/build.gradle @@ -409,6 +409,7 @@ tasks.register("japicmp", me.champeau.gradle.japicmp.JapicmpTask) { failOnModification = true ignoreMissingClasses = 
true annotationIncludes = ['@org.opensearch.common.annotation.PublicApi', '@org.opensearch.common.annotation.DeprecatedApi'] + annotationExcludes = ['@org.opensearch.common.annotation.InternalApi'] txtOutputFile = layout.buildDirectory.file("reports/java-compatibility/report.txt") htmlOutputFile = layout.buildDirectory.file("reports/java-compatibility/report.html") dependsOn downloadJapicmpCompareTarget diff --git a/server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..f1249066d10f2 --- /dev/null +++ b/server/licenses/lucene-analysis-common-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +7e282aab7388efc911348f1eacd90e661580dda7 \ No newline at end of file diff --git a/server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 4b545e061c52f..0000000000000 --- a/server/licenses/lucene-analysis-common-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -73696492c6e59972974cd91e03ad9464e6b5bfcd \ No newline at end of file diff --git a/server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..ac50c5e110a72 --- /dev/null +++ b/server/licenses/lucene-backward-codecs-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +69e59ba4bed4c58836d2727d72b7f0095d2dcb92 \ No newline at end of file diff --git a/server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index ae4ffb2b1800b..0000000000000 --- a/server/licenses/lucene-backward-codecs-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -3cbb29ecc873e8c880a6f32e739655551708dbcf \ No newline at end of file diff --git a/server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..e3fd1708ea428 --- /dev/null +++ b/server/licenses/lucene-core-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +51ff4940eb1024184bbaa5dae39695d2392c5bab \ No newline at end of file diff --git a/server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 299283562fddc..0000000000000 --- a/server/licenses/lucene-core-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -826b328c37ea7f27c05d685db03bf8d2b00457ff \ No newline at end of file diff --git a/server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..cc5bf5bfd8ec0 --- /dev/null +++ b/server/licenses/lucene-grouping-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +5847a7d47f13ecb7f039fb9adf6f3b8e4bddde77 \ No newline at end of file diff --git a/server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index b0268c98167d3..0000000000000 --- a/server/licenses/lucene-grouping-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a3a7003dc83197523e830f058a3748dbea96cab7 \ No newline at end of file diff --git a/server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 
b/server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..eb14059d2cd8c --- /dev/null +++ b/server/licenses/lucene-highlighter-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +7cc0a26777a479f06fbcfae7abc23e784e1a00dc \ No newline at end of file diff --git a/server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index d87927364b5a8..0000000000000 --- a/server/licenses/lucene-highlighter-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -00eb386915c3cffa9efcef2dc4c406f8a6776afe \ No newline at end of file diff --git a/server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..b87170c39c78c --- /dev/null +++ b/server/licenses/lucene-join-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +9cd99401c826d910da3c2beab8e42f1af8be6ea4 \ No newline at end of file diff --git a/server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 25a95546ab544..0000000000000 --- a/server/licenses/lucene-join-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -bb1fc572da7d473bf39672fd8ac323b15a1ffff0 \ No newline at end of file diff --git a/server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..de591dd659cb5 --- /dev/null +++ b/server/licenses/lucene-memory-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +cfee136ecbc3df7adc38b38e020dca5e61c22773 \ No newline at end of file diff --git a/server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index a0b3fd812561c..0000000000000 --- a/server/licenses/lucene-memory-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -05ebfcef0435f4870859a19c93020e24398bb939 \ No newline at end of file diff --git a/server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..1a999bb9c6686 --- /dev/null +++ b/server/licenses/lucene-misc-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +afbc5adf93d4eb1a1b109ad828d1968bf16ef292 \ No newline at end of file diff --git a/server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 1e2cc97c37257..0000000000000 --- a/server/licenses/lucene-misc-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -d5747ed1be242b59aa36b0c32b0d3bd26b1d8fb8 \ No newline at end of file diff --git a/server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..783a26551ae8c --- /dev/null +++ b/server/licenses/lucene-queries-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +16907c36f6adb8ba8f260e05738c66afb37c72d3 \ No newline at end of file diff --git a/server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 31d4fe2886fc1..0000000000000 --- a/server/licenses/lucene-queries-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ 
-fb6678d7fe035e55c545450682b67be49457ef1b \ No newline at end of file diff --git a/server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..b3e9e4de96174 --- /dev/null +++ b/server/licenses/lucene-queryparser-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +72baa9bddcf2efb71ffb695f1e9f548699ec13a0 \ No newline at end of file diff --git a/server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 754e4ea20765f..0000000000000 --- a/server/licenses/lucene-queryparser-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -a11d7f56a9e78dc8e61f85b9b54ad94d73583bb3 \ No newline at end of file diff --git a/server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..2aefa435b1e9a --- /dev/null +++ b/server/licenses/lucene-sandbox-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +dd3c63066f583d90b563ebaa6fbe61c603403acb \ No newline at end of file diff --git a/server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 08c2bc48ae85b..0000000000000 --- a/server/licenses/lucene-sandbox-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -75352855bcc052abfba821f878a27fd2b328fb1c \ No newline at end of file diff --git a/server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..d27112c6db6ab --- /dev/null +++ b/server/licenses/lucene-spatial-extras-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +69b99530e0b05251c12863bee6a9325cafd5fdaa \ No newline at end of file diff --git a/server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 5e0b7196f48c2..0000000000000 --- a/server/licenses/lucene-spatial-extras-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -299be103216d67ca092bef177642b275224e77a6 \ No newline at end of file diff --git a/server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..29423ac0ababd --- /dev/null +++ b/server/licenses/lucene-spatial3d-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +a67d193b4b08790169db7cf005a2429991260287 \ No newline at end of file diff --git a/server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 b/server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index c79b34adea5e2..0000000000000 --- a/server/licenses/lucene-spatial3d-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -29b4a76cd0bdabe0e067063831e661dedac6e503 \ No newline at end of file diff --git a/server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 b/server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 new file mode 100644 index 0000000000000..6ce1f639ccbb7 --- /dev/null +++ b/server/licenses/lucene-suggest-9.12.0-snapshot-847316d.jar.sha1 @@ -0,0 +1 @@ +7a1625ae39071ccbfb3af11df5a74291758f4b47 \ No newline at end of file diff --git a/server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 
b/server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 deleted file mode 100644 index 8d5334f0c4619..0000000000000 --- a/server/licenses/lucene-suggest-9.12.0-snapshot-c896995.jar.sha1 +++ /dev/null @@ -1 +0,0 @@ -597edb659e9ea93398a816e6837da7d47ef53873 \ No newline at end of file diff --git a/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java b/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java index dbcb030d8a4f7..f4b5f112f5785 100644 --- a/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/cluster/allocation/ClusterRerouteIT.java @@ -273,7 +273,8 @@ public void testDelayWithALargeAmountOfShards() throws Exception { internalCluster().stopRandomNode(InternalTestCluster.nameFilter(node_1)); // This might run slowly on older hardware - ensureGreen(TimeValue.timeValueMinutes(2)); + // In some cases, the shards will be rebalanced back and forth; it seems like a very low-probability bug. + ensureGreen(TimeValue.timeValueMinutes(2), false); } private void rerouteWithAllocateLocalGateway(Settings commonSettings) throws Exception { diff --git a/server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java b/server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java new file mode 100644 index 0000000000000..8e5193b650868 --- /dev/null +++ b/server/src/internalClusterTest/java/org/opensearch/index/mapper/StarTreeMapperIT.java @@ -0,0 +1,440 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
+ */ + +package org.opensearch.index.mapper; + +import org.opensearch.action.support.master.AcknowledgedResponse; +import org.opensearch.common.Rounding; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.core.index.Index; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.IndexService; +import org.opensearch.index.compositeindex.CompositeIndexSettings; +import org.opensearch.index.compositeindex.datacube.DateDimension; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; +import org.opensearch.indices.IndicesService; +import org.opensearch.test.OpenSearchIntegTestCase; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.util.Arrays; +import java.util.Collections; +import java.util.List; +import java.util.Set; + +import static org.opensearch.common.xcontent.XContentFactory.jsonBuilder; +import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; + +/** + * Integration tests for star tree mapper + */ +public class StarTreeMapperIT extends OpenSearchIntegTestCase { + private static final String TEST_INDEX = "test"; + + private static XContentBuilder createMinimalTestMapping(boolean invalidDim, boolean invalidMetric, boolean keywordDim) { + try { + return jsonBuilder().startObject() + .startObject("composite") + .startObject("startree-1") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "timestamp") + .endObject() + .startObject() + .field("name", getDim(invalidDim, keywordDim)) + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", getDim(invalidMetric, false)) + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("numeric_dv") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("numeric") + .field("type", "integer") + .field("doc_values", false) + .endObject() + .startObject("keyword_dv") + .field("type", "keyword") + .field("doc_values", true) + .endObject() + .startObject("keyword") + .field("type", "keyword") + .field("doc_values", false) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static XContentBuilder createMaxDimTestMapping() { + try { + return jsonBuilder().startObject() + .startObject("composite") + .startObject("startree-1") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "timestamp") + .startArray("calendar_intervals") + .value("day") + .value("month") + .endArray() + .endObject() + .startObject() + .field("name", "dim2") + .endObject() + .startObject() + .field("name", "dim3") + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", "dim2") + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("dim2") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("dim3") + .field("type", "integer") + .field("doc_values", 
true) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static XContentBuilder createTestMappingWithoutStarTree(boolean invalidDim, boolean invalidMetric, boolean keywordDim) { + try { + return jsonBuilder().startObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("numeric_dv") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("numeric") + .field("type", "integer") + .field("doc_values", false) + .endObject() + .startObject("keyword_dv") + .field("type", "keyword") + .field("doc_values", true) + .endObject() + .startObject("keyword") + .field("type", "keyword") + .field("doc_values", false) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static XContentBuilder createUpdateTestMapping(boolean changeDim, boolean sameStarTree) { + try { + return jsonBuilder().startObject() + .startObject("composite") + .startObject(sameStarTree ? "startree-1" : "startree-2") + .field("type", "star_tree") + .startObject("config") + .startArray("ordered_dimensions") + .startObject() + .field("name", "timestamp") + .endObject() + .startObject() + .field("name", changeDim ? "numeric_new" : getDim(false, false)) + .endObject() + .endArray() + .startArray("metrics") + .startObject() + .field("name", getDim(false, false)) + .endObject() + .endArray() + .endObject() + .endObject() + .endObject() + .startObject("properties") + .startObject("timestamp") + .field("type", "date") + .endObject() + .startObject("numeric_dv") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("numeric") + .field("type", "integer") + .field("doc_values", false) + .endObject() + .startObject("numeric_new") + .field("type", "integer") + .field("doc_values", true) + .endObject() + .startObject("keyword_dv") + .field("type", "keyword") + .field("doc_values", true) + .endObject() + .startObject("keyword") + .field("type", "keyword") + .field("doc_values", false) + .endObject() + .endObject() + .endObject(); + } catch (IOException e) { + throw new IllegalStateException(e); + } + } + + private static String getDim(boolean hasDocValues, boolean isKeyword) { + if (hasDocValues) { + return "numeric"; + } else if (isKeyword) { + return "keyword"; + } + return "numeric_dv"; + } + + @Override + protected Settings featureFlagSettings() { + return Settings.builder().put(super.featureFlagSettings()).put(FeatureFlags.STAR_TREE_INDEX, "true").build(); + } + + @Before + public final void setupNodeSettings() { + Settings request = Settings.builder().put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), true).build(); + assertAcked(client().admin().cluster().prepareUpdateSettings().setPersistentSettings(request).get()); + } + + public void testValidCompositeIndex() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + Iterable dataNodeInstances = internalCluster().getDataNodeInstances(IndicesService.class); + for (IndicesService service : dataNodeInstances) { + final Index index = resolveIndex("test"); + if (service.hasIndex(index)) { + IndexService indexService = service.indexService(index); + Set fts = indexService.mapperService().getCompositeFieldTypes(); + + for (CompositeMappedFieldType ft : fts) { + assertTrue(ft instanceof StarTreeMapper.StarTreeFieldType); + StarTreeMapper.StarTreeFieldType 
starTreeFieldType = (StarTreeMapper.StarTreeFieldType) ft; + assertEquals("timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.MINUTES_OF_HOUR, + Rounding.DateTimeUnit.HOUR_OF_DAY + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("numeric_dv", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("numeric_dv", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList( + MetricStat.AVG, + MetricStat.COUNT, + MetricStat.SUM, + MetricStat.MAX, + MetricStat.MIN + ); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(10000, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals( + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, + starTreeFieldType.getStarTreeConfig().getBuildMode() + ); + assertEquals(Collections.emptySet(), starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims()); + } + } + } + } + + public void testUpdateIndexWithAdditionOfStarTree() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> client().admin().indices().preparePutMapping(TEST_INDEX).setSource(createUpdateTestMapping(false, false)).get() + ); + assertEquals("Index cannot have more than [1] star tree fields", ex.getMessage()); + } + + public void testUpdateIndexWithNewerStarTree() { + prepareCreate(TEST_INDEX).setMapping(createTestMappingWithoutStarTree(false, false, false)).get(); + + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> client().admin().indices().preparePutMapping(TEST_INDEX).setSource(createUpdateTestMapping(false, false)).get() + ); + assertEquals( + "Composite fields must be specified during index creation, addition of new composite fields during update is not supported", + ex.getMessage() + ); + } + + public void testUpdateIndexWhenMappingIsDifferent() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + + // update some field in the mapping + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> client().admin().indices().preparePutMapping(TEST_INDEX).setSource(createUpdateTestMapping(true, true)).get() + ); + assertTrue(ex.getMessage().contains("Cannot update parameter [config] from")); + } + + public void testUpdateIndexWhenMappingIsSame() { + prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, false)).get(); + + // update some field in the mapping + AcknowledgedResponse putMappingResponse = client().admin() + .indices() + .preparePutMapping(TEST_INDEX) + .setSource(createMinimalTestMapping(false, false, false)) + .get(); + assertAcked(putMappingResponse); + + Iterable dataNodeInstances = internalCluster().getDataNodeInstances(IndicesService.class); + for (IndicesService service : dataNodeInstances) { + final Index index = resolveIndex("test"); + if (service.hasIndex(index)) { + IndexService indexService = service.indexService(index); + Set fts = indexService.mapperService().getCompositeFieldTypes(); + + for (CompositeMappedFieldType ft : fts) { + assertTrue(ft instanceof StarTreeMapper.StarTreeFieldType); + 
StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) ft; + assertEquals("timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.MINUTES_OF_HOUR, + Rounding.DateTimeUnit.HOUR_OF_DAY + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("numeric_dv", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("numeric_dv", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList( + MetricStat.AVG, + MetricStat.COUNT, + MetricStat.SUM, + MetricStat.MAX, + MetricStat.MIN + ); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(10000, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals( + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, + starTreeFieldType.getStarTreeConfig().getBuildMode() + ); + assertEquals(Collections.emptySet(), starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims()); + } + } + } + } + + public void testInvalidDimCompositeIndex() { + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(true, false, false)).get() + ); + assertEquals( + "Aggregations not supported for the dimension field [numeric] with field type [integer] as part of star tree field", + ex.getMessage() + ); + } + + public void testMaxDimsCompositeIndex() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMaxDimTestMapping()) + .setSettings(Settings.builder().put(StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING.getKey(), 2)) + .get() + ); + assertEquals( + "Failed to parse mapping [_doc]: ordered_dimensions cannot have more than 2 dimensions for star tree field [startree-1]", + ex.getMessage() + ); + } + + public void testMaxCalendarIntervalsCompositeIndex() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMaxDimTestMapping()) + .setSettings(Settings.builder().put(StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING.getKey(), 1)) + .get() + ); + assertEquals( + "Failed to parse mapping [_doc]: At most [1] calendar intervals are allowed in dimension [timestamp]", + ex.getMessage() + ); + } + + public void testUnsupportedDim() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, false, true)).get() + ); + assertEquals( + "Failed to parse mapping [_doc]: unsupported field type associated with dimension [keyword] as part of star tree field [startree-1]", + ex.getMessage() + ); + } + + public void testInvalidMetric() { + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> prepareCreate(TEST_INDEX).setMapping(createMinimalTestMapping(false, true, false)).get() + ); + assertEquals( + "Aggregations not supported for the metrics field [numeric] with field type [integer] as part of star tree field", + ex.getMessage() + ); + } + + @After + public final void cleanupNodeSettings() { + assertAcked( + client().admin() + .cluster() + .prepareUpdateSettings() + 
.setPersistentSettings(Settings.builder().putNull("*")) + .setTransientSettings(Settings.builder().putNull("*")) + ); + } +} diff --git a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java index 6885d37c4aab0..216c104dfecc1 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotemigration/RemoteMigrationIndexMetadataUpdateIT.java @@ -275,7 +275,6 @@ initalMetadataVersion < internalCluster().client() * After shard relocation completes, shuts down the docrep nodes and asserts remote * index settings are applied even when the index is in YELLOW state */ - @AwaitsFix(bugUrl = "https://github.com/opensearch-project/OpenSearch/issues/13737") public void testIndexSettingsUpdatedEvenForMisconfiguredReplicas() throws Exception { internalCluster().startClusterManagerOnlyNode(); @@ -332,7 +331,6 @@ public void testIndexSettingsUpdatedEvenForMisconfiguredReplicas() throws Except * After shard relocation completes, restarts the docrep node holding extra replica shard copy * and asserts remote index settings are applied as soon as the docrep replica copy is unassigned */ - @AwaitsFix(bugUrl = "https://github.com/opensearch-project/OpenSearch/issues/13871") public void testIndexSettingsUpdatedWhenDocrepNodeIsRestarted() throws Exception { internalCluster().startClusterManagerOnlyNode(); diff --git a/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java b/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java index 96d6338e5913b..194dce5f4a57a 100644 --- a/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/remotestore/RemoteStoreIT.java @@ -65,7 +65,6 @@ import static org.opensearch.index.remote.RemoteStoreEnums.DataType.METADATA; import static org.opensearch.index.shard.IndexShardTestCase.getTranslog; import static org.opensearch.indices.RemoteStoreSettings.CLUSTER_REMOTE_TRANSLOG_BUFFER_INTERVAL_SETTING; -import static org.opensearch.test.OpenSearchTestCase.getShardLevelBlobPath; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertHitCount; import static org.hamcrest.Matchers.comparesEqualTo; @@ -133,6 +132,21 @@ private void testPeerRecovery(int numberOfIterations, boolean invokeFlush) throw ); } + public void testRemoteStoreIndexCreationAndDeletionWithReferencedStore() throws InterruptedException, ExecutionException { + String dataNode = internalCluster().startNodes(1).get(0); + createIndex(INDEX_NAME, remoteStoreIndexSettings(0)); + ensureYellowAndNoInitializingShards(INDEX_NAME); + ensureGreen(INDEX_NAME); + + IndexShard indexShard = getIndexShard(dataNode, INDEX_NAME); + + // Simulating a condition where store is already in use by increasing ref count, this helps in testing index + // deletion when refresh is in-progress. 
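+ // Store instances are ref-counted; the extra incRef below keeps the store open while the index is
+ // deleted, and the paired decRef afterwards releases that reference.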
+ indexShard.store().incRef(); + assertAcked(client().admin().indices().prepareDelete(INDEX_NAME)); + indexShard.store().decRef(); + } + public void testPeerRecoveryWithRemoteStoreAndRemoteTranslogNoDataFlush() throws Exception { testPeerRecovery(1, true); } diff --git a/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java b/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java index 2440a3c64e956..1c199df4d548e 100644 --- a/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java +++ b/server/src/internalClusterTest/java/org/opensearch/snapshots/SearchableSnapshotIT.java @@ -28,6 +28,7 @@ import org.opensearch.cluster.block.ClusterBlockException; import org.opensearch.cluster.metadata.IndexMetadata; import org.opensearch.cluster.node.DiscoveryNode; +import org.opensearch.cluster.node.DiscoveryNodeRole; import org.opensearch.cluster.routing.GroupShardsIterator; import org.opensearch.cluster.routing.ShardIterator; import org.opensearch.cluster.routing.ShardRouting; @@ -35,6 +36,7 @@ import org.opensearch.common.Priority; import org.opensearch.common.io.PathUtils; import org.opensearch.common.settings.Settings; +import org.opensearch.common.settings.SettingsException; import org.opensearch.common.unit.TimeValue; import org.opensearch.core.common.unit.ByteSizeUnit; import org.opensearch.core.index.Index; @@ -65,10 +67,13 @@ import java.util.stream.StreamSupport; import static org.opensearch.action.admin.cluster.node.stats.NodesStatsRequest.Metric.FS; +import static org.opensearch.common.util.FeatureFlags.TIERED_REMOTE_INDEX; import static org.opensearch.core.common.util.CollectionUtils.iterableAsArrayList; import static org.opensearch.index.store.remote.filecache.FileCacheSettings.DATA_TO_FILE_CACHE_SIZE_RATIO_SETTING; import static org.opensearch.test.NodeRoles.clusterManagerOnlyNode; import static org.opensearch.test.NodeRoles.dataNode; +import static org.opensearch.test.NodeRoles.onlyRole; +import static org.opensearch.test.NodeRoles.onlyRoles; import static org.opensearch.test.hamcrest.OpenSearchAssertions.assertAcked; import static org.hamcrest.Matchers.contains; import static org.hamcrest.Matchers.containsString; @@ -1009,6 +1014,26 @@ public void cleanup() throws Exception { ); } + public void testStartSearchNode() throws Exception { + // test start dedicated search node + internalCluster().startNode(Settings.builder().put(onlyRole(DiscoveryNodeRole.SEARCH_ROLE))); + // test start node without search role + internalCluster().startNode(Settings.builder().put(onlyRole(DiscoveryNodeRole.DATA_ROLE))); + // test start non-dedicated search node with TIERED_REMOTE_INDEX feature enabled + internalCluster().startNode( + Settings.builder() + .put(onlyRoles(Set.of(DiscoveryNodeRole.SEARCH_ROLE, DiscoveryNodeRole.DATA_ROLE))) + .put(TIERED_REMOTE_INDEX, true) + ); + // test start non-dedicated search node + assertThrows( + SettingsException.class, + () -> internalCluster().startNode( + Settings.builder().put(onlyRoles(Set.of(DiscoveryNodeRole.SEARCH_ROLE, DiscoveryNodeRole.DATA_ROLE))) + ) + ); + } + private void assertSearchableSnapshotIndexDirectoryExistence(String nodeName, Index index, boolean exists) throws Exception { final Node node = internalCluster().getInstance(Node.class, nodeName); final ShardId shardId = new ShardId(index, 0); diff --git a/server/src/main/java/org/apache/lucene/codecs/lucene90/Composite99DocValuesConsumer.java 
b/server/src/main/java/org/apache/lucene/codecs/lucene90/Composite99DocValuesConsumer.java new file mode 100644 index 0000000000000..04a49d517ff81 --- /dev/null +++ b/server/src/main/java/org/apache/lucene/codecs/lucene90/Composite99DocValuesConsumer.java @@ -0,0 +1,68 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.apache.lucene.codecs.lucene90; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.SegmentWriteState; + +import java.io.IOException; + +/** + * This class is an abstraction of the {@link DocValuesConsumer} for the Star Tree index structure. + * It is responsible for consuming various types of document values (numeric, binary, sorted, sorted numeric, + * and sorted set) for fields in the Star Tree index. + * + * @opensearch.experimental + */ +public class Composite99DocValuesConsumer extends DocValuesConsumer { + + Lucene90DocValuesConsumer lucene90DocValuesConsumer; + + public Composite99DocValuesConsumer( + SegmentWriteState state, + String dataCodec, + String dataExtension, + String metaCodec, + String metaExtension + ) throws IOException { + lucene90DocValuesConsumer = new Lucene90DocValuesConsumer(state, dataCodec, dataExtension, metaCodec, metaExtension); + } + + @Override + public void close() throws IOException { + this.lucene90DocValuesConsumer.close(); + } + + @Override + public void addNumericField(FieldInfo fieldInfo, DocValuesProducer docValuesProducer) throws IOException { + lucene90DocValuesConsumer.addNumericField(fieldInfo, docValuesProducer); + } + + @Override + public void addBinaryField(FieldInfo fieldInfo, DocValuesProducer docValuesProducer) throws IOException { + lucene90DocValuesConsumer.addBinaryField(fieldInfo, docValuesProducer); + } + + @Override + public void addSortedField(FieldInfo fieldInfo, DocValuesProducer docValuesProducer) throws IOException { + lucene90DocValuesConsumer.addSortedField(fieldInfo, docValuesProducer); + } + + @Override + public void addSortedNumericField(FieldInfo fieldInfo, DocValuesProducer docValuesProducer) throws IOException { + lucene90DocValuesConsumer.addSortedNumericField(fieldInfo, docValuesProducer); + } + + @Override + public void addSortedSetField(FieldInfo fieldInfo, DocValuesProducer docValuesProducer) throws IOException { + lucene90DocValuesConsumer.addSortedSetField(fieldInfo, docValuesProducer); + } +} diff --git a/server/src/main/java/org/apache/lucene/codecs/lucene90/StarTree99DocValuesProducer.java b/server/src/main/java/org/apache/lucene/codecs/lucene90/StarTree99DocValuesProducer.java new file mode 100644 index 0000000000000..154f4626bc472 --- /dev/null +++ b/server/src/main/java/org/apache/lucene/codecs/lucene90/StarTree99DocValuesProducer.java @@ -0,0 +1,158 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
+ */ + +package org.apache.lucene.codecs.lucene90; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.BinaryDocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.SegmentReadState; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.SortedSetDocValues; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricEntry; + +import java.io.IOException; +import java.util.Collections; +import java.util.List; + +import static org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeHelper.fullFieldNameForStarTreeDimensionsDocValues; +import static org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeHelper.fullFieldNameForStarTreeMetricsDocValues; + +/** + * This class is a custom abstraction of the {@link DocValuesProducer} for the Star Tree index structure. + * It is responsible for providing access to various types of document values (numeric, binary, sorted, sorted numeric, + * and sorted set) for fields in the Star Tree index. + * + * @opensearch.experimental + */ +public class StarTree99DocValuesProducer extends DocValuesProducer { + + Lucene90DocValuesProducer lucene90DocValuesProducer; + private final List dimensions; + private final List metrics; + private final FieldInfos fieldInfos; + + public StarTree99DocValuesProducer( + SegmentReadState state, + String dataCodec, + String dataExtension, + String metaCodec, + String metaExtension, + List dimensions, + List metricEntries, + String compositeFieldName + ) throws IOException { + this.dimensions = dimensions; + this.metrics = metricEntries; + + // populates the dummy list of field infos to fetch doc id set iterators for respective fields. 
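+ // For example, a dimension "status" of a composite field "startree-1" (names illustrative) is looked up
+ // via the doc values field named by fullFieldNameForStarTreeDimensionsDocValues("startree-1", "status");
+ // only the generated field names matter, the synthetic field numbers carry no meaning.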
+ this.fieldInfos = new FieldInfos(getFieldInfoList(compositeFieldName)); + SegmentReadState segmentReadState = new SegmentReadState(state.directory, state.segmentInfo, fieldInfos, state.context); + lucene90DocValuesProducer = new Lucene90DocValuesProducer(segmentReadState, dataCodec, dataExtension, metaCodec, metaExtension); + } + + @Override + public NumericDocValues getNumeric(FieldInfo field) throws IOException { + return this.lucene90DocValuesProducer.getNumeric(field); + } + + @Override + public BinaryDocValues getBinary(FieldInfo field) throws IOException { + return this.lucene90DocValuesProducer.getBinary(field); + } + + @Override + public SortedDocValues getSorted(FieldInfo field) throws IOException { + return this.lucene90DocValuesProducer.getSorted(field); + } + + @Override + public SortedNumericDocValues getSortedNumeric(FieldInfo field) throws IOException { + return this.lucene90DocValuesProducer.getSortedNumeric(field); + } + + @Override + public SortedSetDocValues getSortedSet(FieldInfo field) throws IOException { + return this.lucene90DocValuesProducer.getSortedSet(field); + } + + @Override + public void checkIntegrity() throws IOException { + this.lucene90DocValuesProducer.checkIntegrity(); + } + + // returns the doc id set iterator based on field name + public SortedNumericDocValues getSortedNumeric(String fieldName) throws IOException { + return this.lucene90DocValuesProducer.getSortedNumeric(fieldInfos.fieldInfo(fieldName)); + } + + @Override + public void close() throws IOException { + this.lucene90DocValuesProducer.close(); + } + + private FieldInfo[] getFieldInfoList(String compositeFieldName) { + FieldInfo[] fieldInfoList = new FieldInfo[this.dimensions.size() + metrics.size()]; + // field number is not really used. We depend on unique field names to get the desired iterator + int fieldNumber = 0; + + for (FieldInfo dimension : this.dimensions) { + fieldInfoList[fieldNumber] = new FieldInfo( + fullFieldNameForStarTreeDimensionsDocValues(compositeFieldName, dimension.getName()), + fieldNumber, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldNumber++; + } + for (MetricEntry metric : metrics) { + fieldInfoList[fieldNumber] = new FieldInfo( + fullFieldNameForStarTreeMetricsDocValues(compositeFieldName, metric.getMetricName(), metric.getMetricStat().getTypeName()), + fieldNumber, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldNumber++; + } + return fieldInfoList; + } + +} diff --git a/server/src/main/java/org/apache/lucene/index/BaseStarTreeBuilder.java b/server/src/main/java/org/apache/lucene/index/BaseStarTreeBuilder.java new file mode 100644 index 0000000000000..a9f55339ca3ad --- /dev/null +++ b/server/src/main/java/org/apache/lucene/index/BaseStarTreeBuilder.java @@ -0,0 +1,818 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ +package org.apache.lucene.index; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.util.Counter; +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.ValueAggregator; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.index.compositeindex.datacube.startree.builder.StarTreeBuilder; +import org.opensearch.index.compositeindex.datacube.startree.builder.StarTreesBuilder; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeBuilderUtils; +import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.NumberFieldMapper; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeHelper.fullFieldNameForStarTreeDimensionsDocValues; +import static org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeHelper.fullFieldNameForStarTreeMetricsDocValues; + +/** + * Builder for star tree. Defines the algorithm to construct star-tree + * See {@link StarTreesBuilder} for information around the construction of star-trees based on star-tree fields + * + * @opensearch.experimental + */ +public abstract class BaseStarTreeBuilder implements StarTreeBuilder { + + private static final Logger logger = LogManager.getLogger(BaseStarTreeBuilder.class); + + /** + * Default value for star node + */ + public static final Long STAR_IN_DOC_VALUES_INDEX = null; + + protected final Set skipStarNodeCreationForDimensions; + + protected final List metricAggregatorInfos; + protected final int numMetrics; + protected final int numDimensions; + protected int numStarTreeDocs; + protected int totalSegmentDocs; + protected int numStarTreeNodes; + protected final int maxLeafDocuments; + + protected final StarTreeBuilderUtils.TreeNode rootNode = getNewNode(); + private final StarTreeField starTreeField; + + private final MapperService mapperService; + private final SegmentWriteState writeState; + + private final IndexOutput metaOut; + private final IndexOutput dataOut; + + /** + * Builds star tree based on star tree field configuration consisting of dimensions, metrics and star tree index specific configuration. 
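+ * <p>Rough build pipeline (an illustrative sketch of the methods below; argument names abbreviated):
+ * <pre>
+ * // build() sorts and aggregates the segment documents, then:
+ * constructStarTree(rootNode, 0, numStarTreeDocs);    // split nodes, adding star-nodes
+ * createAggregatedDocs(rootNode);                     // pre-aggregated document per node
+ * createSortedDocValuesIndices(consumer, fieldNum);   // persist dimensions and metrics as doc values
+ * serializeStarTree(numSegmentDocs);                  // write tree data and metadata
+ * </pre>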
+ * + * @param starTreeField holds the configuration for the star tree + * @param writeState stores the segment write state + * @param mapperService helps to find the original type of the field + */ + protected BaseStarTreeBuilder( + IndexOutput metaOut, + IndexOutput dataOut, + StarTreeField starTreeField, + SegmentWriteState writeState, + MapperService mapperService + ) throws IOException { + + logger.debug("Building star tree : {}", starTreeField); + + this.metaOut = metaOut; + this.dataOut = dataOut; + + this.starTreeField = starTreeField; + StarTreeFieldConfiguration starTreeFieldSpec = starTreeField.getStarTreeConfig(); + + List dimensionsSplitOrder = starTreeField.getDimensionsOrder(); + this.numDimensions = dimensionsSplitOrder.size(); + + this.skipStarNodeCreationForDimensions = new HashSet<>(); + this.totalSegmentDocs = writeState.segmentInfo.maxDoc(); + this.mapperService = mapperService; + this.writeState = writeState; + + Set skipStarNodeCreationForDimensions = starTreeFieldSpec.getSkipStarNodeCreationInDims(); + + for (int i = 0; i < numDimensions; i++) { + if (skipStarNodeCreationForDimensions.contains(dimensionsSplitOrder.get(i).getField())) { + this.skipStarNodeCreationForDimensions.add(i); + } + } + + this.metricAggregatorInfos = generateMetricAggregatorInfos(mapperService); + this.numMetrics = metricAggregatorInfos.size(); + this.maxLeafDocuments = starTreeFieldSpec.maxLeafDocs(); + } + + /** + * Generates the configuration required to perform aggregation for all the metrics on a field + * + * @return list of MetricAggregatorInfo + */ + public List generateMetricAggregatorInfos(MapperService mapperService) { + List metricAggregatorInfos = new ArrayList<>(); + for (Metric metric : this.starTreeField.getMetrics()) { + for (MetricStat metricType : metric.getMetrics()) { + IndexNumericFieldData.NumericType numericType; + Mapper fieldMapper = mapperService.documentMapper().mappers().getMapper(metric.getField()); + if (fieldMapper instanceof NumberFieldMapper) { + numericType = ((NumberFieldMapper) fieldMapper).fieldType().numericType(); + } else { + logger.error("unsupported mapper type"); + throw new IllegalStateException("unsupported mapper type"); + } + + MetricAggregatorInfo metricAggregatorInfo = new MetricAggregatorInfo( + metricType, + metric.getField(), + starTreeField.getName(), + numericType + ); + metricAggregatorInfos.add(metricAggregatorInfo); + } + } + return metricAggregatorInfos; + } + + /** + * Returns the sequential doc values readers used to read the metric values for all the metrics of a field + * + * @return list of metric readers + */ + public List getMetricReaders(SegmentWriteState state, Map fieldProducerMap) + throws IOException { + + List metricReaders = new ArrayList<>(); + for (Metric metric : this.starTreeField.getMetrics()) { + for (MetricStat metricType : metric.getMetrics()) { + SequentialDocValuesIterator metricReader; + FieldInfo metricFieldInfo = state.fieldInfos.fieldInfo(metric.getField()); + if (metricType != MetricStat.COUNT) { + // The COUNT metric type does not need a doc id set iterator; all other metrics read their values from doc values + metricReader = new SequentialDocValuesIterator( + fieldProducerMap.get(metricFieldInfo.name).getSortedNumeric(metricFieldInfo) + ); + } else { + metricReader = new SequentialDocValuesIterator(); + } + + metricReaders.add(metricReader); + } + } + return metricReaders; + } + + /** + * Builds the star tree from the original segment documents + * + * @param fieldProducerMap contains the docValues producer to get
docValues associated with each field + * @param fieldNumberAcrossStarTrees + * @param starTreeDocValuesConsumer + * @throws IOException when we are unable to build star-tree + */ + public void build( + Map fieldProducerMap, + AtomicInteger fieldNumberAcrossStarTrees, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException { + long startTime = System.currentTimeMillis(); + logger.debug("Star-tree build is a go with star tree field {}", starTreeField.getName()); + + List metricReaders = getMetricReaders(writeState, fieldProducerMap); + List dimensionsSplitOrder = starTreeField.getDimensionsOrder(); + SequentialDocValuesIterator[] dimensionReaders = new SequentialDocValuesIterator[dimensionsSplitOrder.size()]; + for (int i = 0; i < numDimensions; i++) { + String dimension = dimensionsSplitOrder.get(i).getField(); + FieldInfo dimensionFieldInfo = writeState.fieldInfos.fieldInfo(dimension); + dimensionReaders[i] = new SequentialDocValuesIterator( + fieldProducerMap.get(dimensionFieldInfo.name).getSortedNumeric(dimensionFieldInfo) + ); + } + Iterator starTreeDocumentIterator = sortAndAggregateSegmentDocuments( + totalSegmentDocs, + dimensionReaders, + metricReaders + ); + logger.debug("Sorting and aggregating star-tree in ms : {}", (System.currentTimeMillis() - startTime)); + build(starTreeDocumentIterator, fieldNumberAcrossStarTrees, starTreeDocValuesConsumer); + logger.debug("Finished Building star-tree in ms : {}", (System.currentTimeMillis() - startTime)); + } + + /** + * Builds the star tree using sorted and aggregated star-tree Documents + * + * @param starTreeDocumentIterator contains the sorted and aggregated documents + * @param fieldNumberAcrossStarTrees + * @param starTreeDocValuesConsumer + * @throws IOException when we are unable to build star-tree + */ + public void build( + Iterator starTreeDocumentIterator, + AtomicInteger fieldNumberAcrossStarTrees, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException { + int numSegmentStarTreeDocument = totalSegmentDocs; + + while (starTreeDocumentIterator.hasNext()) { + appendToStarTree(starTreeDocumentIterator.next()); + } + int numStarTreeDocument = numStarTreeDocs; + logger.debug("Generated star tree docs : [{}] from segment docs : [{}]", numStarTreeDocument, numSegmentStarTreeDocument); + + if (numStarTreeDocs == 0) { + // serialize the star tree data + serializeStarTree(numSegmentStarTreeDocument); + return; + } + + constructStarTree(rootNode, 0, numStarTreeDocs); + int numStarTreeDocumentUnderStarNode = numStarTreeDocs - numStarTreeDocument; + logger.debug( + "Finished constructing star-tree, got [ {} ] tree nodes and [ {} ] starTreeDocument under star-node", + numStarTreeNodes, + numStarTreeDocumentUnderStarNode + ); + + createAggregatedDocs(rootNode); + int numAggregatedStarTreeDocument = numStarTreeDocs - numStarTreeDocument - numStarTreeDocumentUnderStarNode; + logger.debug("Finished creating aggregated documents : {}", numAggregatedStarTreeDocument); + + // Create doc values indices in disk + createSortedDocValuesIndices(starTreeDocValuesConsumer, fieldNumberAcrossStarTrees); + + // serialize star-tree + serializeStarTree(numSegmentStarTreeDocument); + } + + private void serializeStarTree(int numSegmentStarTreeDocument) throws IOException { + // serialize the star tree data + long dataFilePointer = dataOut.getFilePointer(); + long totalStarTreeDataLength = StarTreeBuilderUtils.serializeStarTree(dataOut, rootNode, numStarTreeNodes); + + // serialize the star tree meta + 
StarTreeBuilderUtils.serializeStarTreeMetadata( + metaOut, + starTreeField, + writeState, + metricAggregatorInfos, + numSegmentStarTreeDocument, + dataFilePointer, + totalStarTreeDataLength + ); + } + + private void createSortedDocValuesIndices(DocValuesConsumer docValuesConsumer, AtomicInteger fieldNumberAcrossStarTrees) + throws IOException { + List dimensionWriters = new ArrayList<>(); + List metricWriters = new ArrayList<>(); + FieldInfo[] dimensionFieldInfoList = new FieldInfo[starTreeField.getDimensionsOrder().size()]; + FieldInfo[] metricFieldInfoList = new FieldInfo[metricAggregatorInfos.size()]; + + for (int i = 0; i < dimensionFieldInfoList.length; i++) { + final FieldInfo fi = new FieldInfo( + fullFieldNameForStarTreeDimensionsDocValues(starTreeField.getName(), starTreeField.getDimensionsOrder().get(i).getField()), + fieldNumberAcrossStarTrees.incrementAndGet(), + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + dimensionFieldInfoList[i] = fi; + dimensionWriters.add(new SortedNumericDocValuesWriter(fi, Counter.newCounter())); + } + for (int i = 0; i < metricAggregatorInfos.size(); i++) { + FieldInfo fi = new FieldInfo( + fullFieldNameForStarTreeMetricsDocValues( + starTreeField.getName(), + metricAggregatorInfos.get(i).getField(), + metricAggregatorInfos.get(i).getMetricStat().getTypeName() + ), + fieldNumberAcrossStarTrees.incrementAndGet(), + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + metricFieldInfoList[i] = fi; + metricWriters.add(new SortedNumericDocValuesWriter(fi, Counter.newCounter())); + } + + for (int docId = 0; docId < numStarTreeDocs; docId++) { + StarTreeDocument starTreeDocument = getStarTreeDocument(docId); + for (int i = 0; i < starTreeDocument.dimensions.length; i++) { + Long val = starTreeDocument.dimensions[i]; + if (val != null) { + dimensionWriters.get(i).addValue(docId, val); + } + } + + for (int i = 0; i < starTreeDocument.metrics.length; i++) { + try { + switch (metricAggregatorInfos.get(i).getValueAggregators().getAggregatedValueType()) { + case LONG: + metricWriters.get(i).addValue(docId, (Long) starTreeDocument.metrics[i]); + break; + case DOUBLE: + metricWriters.get(i).addValue(docId, NumericUtils.doubleToSortableLong((Double) starTreeDocument.metrics[i])); + break; + default: + throw new IllegalStateException("Unknown metric doc value type"); + } + } catch (IllegalArgumentException e) { + logger.info("could not parse the value, exiting creation of star tree"); + } + } + } + + addStarTreeDocValueFields(docValuesConsumer, dimensionWriters, dimensionFieldInfoList, starTreeField.getDimensionsOrder().size()); + addStarTreeDocValueFields(docValuesConsumer, metricWriters, metricFieldInfoList, metricAggregatorInfos.size()); + } + + private void addStarTreeDocValueFields( + DocValuesConsumer docValuesConsumer, + List docValuesWriters, + FieldInfo[] fieldInfoList, + int fieldCount + ) throws IOException { + for (int i = 0; i < fieldCount; i++) { + final int increment = i; + DocValuesProducer docValuesProducer = new EmptyDocValuesProducer() { + @Override + public SortedNumericDocValues getSortedNumeric(FieldInfo field) { + return 
docValuesWriters.get(increment).getDocValues(); + } + }; + docValuesConsumer.addSortedNumericField(fieldInfoList[i], docValuesProducer); + } + } + + /** + * Adds a document to the star-tree. + * + * @param starTreeDocument star tree document to be added + * @throws IOException if an I/O error occurs while adding the document + */ + public abstract void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException; + + /** + * Returns the document of the given document id in the star-tree. + * + * @param docId document id + * @return star tree document + * @throws IOException if an I/O error occurs while fetching the star-tree document + */ + public abstract StarTreeDocument getStarTreeDocument(int docId) throws IOException; + + /** + * Retrieves the list of star-tree documents in the star-tree. + * + * @return Star tree documents + */ + public abstract List getStarTreeDocuments(); + + /** + * Returns the value of the dimension for the given dimension id and document in the star-tree. + * + * @param docId document id + * @param dimensionId dimension id + * @return dimension value + */ + public abstract Long getDimensionValue(int docId, int dimensionId) throws IOException; + + /** + * Sorts and aggregates all the documents in the segment as per the configuration, and returns a star-tree document iterator for all the + * aggregated star-tree documents. + * + * @param numDocs number of documents in the given segment + * @param dimensionReaders List of docValues readers to read dimensions from the segment + * @param metricReaders List of docValues readers to read metrics from the segment + * @return Iterator for the aggregated star-tree document + */ + public abstract Iterator sortAndAggregateSegmentDocuments( + int numDocs, + SequentialDocValuesIterator[] dimensionReaders, + List metricReaders + ) throws IOException; + + /** + * Generates aggregated star-tree documents for star-node. 
This inserts null for the star-node dimension and generates aggregated documents based on the remaining dimensions + * + * @param startDocId start document id (inclusive) in the star-tree + * @param endDocId end document id (exclusive) in the star-tree + * @param dimensionId dimension id of the star-node + * @return Iterator for the aggregated star-tree documents + */ + public abstract Iterator generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) + throws IOException; + + protected StarTreeDocument getSegmentStarTreeDocument( + int currentDocId, + SequentialDocValuesIterator[] dimensionReaders, + List metricReaders + ) throws IOException { + Long[] dimensions = getStarTreeDimensionsFromSegment(currentDocId, dimensionReaders); + Object[] metrics = getStarTreeMetricsFromSegment(currentDocId, metricReaders); + return new StarTreeDocument(dimensions, metrics); + } + + /** + * Returns the dimension values for the next document from the segment + * + * @return dimension values for each of the star-tree dimension + * @throws IOException when we are unable to iterate to the next doc for the given dimension readers + */ + Long[] getStarTreeDimensionsFromSegment(int currentDocId, SequentialDocValuesIterator[] dimensionReaders) throws IOException { + Long[] dimensions = new Long[numDimensions]; + for (int i = 0; i < numDimensions; i++) { + if (dimensionReaders[i] != null) { + try { + dimensionReaders[i].nextDoc(currentDocId); + } catch (IOException e) { + logger.error("unable to iterate to next doc", e); + throw new RuntimeException("unable to iterate to next doc", e); + } catch (Exception e) { + logger.error("unable to read the dimension values from the segment", e); + throw new IllegalStateException("unable to read the dimension values from the segment", e); + } + dimensions[i] = dimensionReaders[i].value(currentDocId); + } else { + throw new IllegalStateException("dimension readers are empty"); + } + } + return dimensions; + } + + /** + * Returns the metric values for the next document from the segment + * + * @return metric values for each of the star-tree metric + * @throws IOException when we are unable to iterate to the next doc for the given metric readers + */ + private Object[] getStarTreeMetricsFromSegment(int currentDocId, List metricsReaders) throws IOException { + Object[] metrics = new Object[numMetrics]; + for (int i = 0; i < numMetrics; i++) { + SequentialDocValuesIterator metricStatReader = metricsReaders.get(i); + if (metricStatReader != null) { + try { + metricStatReader.nextDoc(currentDocId); + } catch (IOException e) { + logger.error("unable to iterate to next doc", e); + throw new RuntimeException("unable to iterate to next doc", e); + } catch (Exception e) { + logger.error("unable to read the metric values from the segment", e); + throw new IllegalStateException("unable to read the metric values from the segment", e); + } + metrics[i] = metricStatReader.value(currentDocId); + } else { + throw new IllegalStateException("metric readers are empty"); + } + } + return metrics; + } + + /** + * Merges a star-tree document from the segment into an aggregated star-tree document. + * A new aggregated star-tree document is created if the aggregated segment document is null. 
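+ * <p>For example, with a single sum metric (values illustrative): reducing segment document
+ * {dims=[5], metrics=[10]} into aggregate {dims=[5], metrics=[32]} yields {dims=[5], metrics=[42]}.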
+ * + * @param aggregatedSegmentDocument aggregated star-tree document + * @param segmentDocument segment star-tree document + * @return merged star-tree document + */ + protected StarTreeDocument reduceSegmentStarTreeDocuments( + StarTreeDocument aggregatedSegmentDocument, + StarTreeDocument segmentDocument + ) { + if (aggregatedSegmentDocument == null) { + Long[] dimensions = Arrays.copyOf(segmentDocument.dimensions, numDimensions); + Object[] metrics = new Object[numMetrics]; + for (int i = 0; i < numMetrics; i++) { + try { + ValueAggregator metricValueAggregator = metricAggregatorInfos.get(i).getValueAggregators(); + StarTreeNumericType starTreeNumericType = metricAggregatorInfos.get(i).getAggregatedValueType(); + metrics[i] = metricValueAggregator.getInitialAggregatedValueForSegmentDocValue( + getLong(segmentDocument.metrics[i]), + starTreeNumericType + ); + } catch (Exception e) { + logger.error("Cannot parse initial segment doc value", e); + throw new IllegalStateException("Cannot parse initial segment doc value [" + segmentDocument.metrics[i] + "]"); + } + } + return new StarTreeDocument(dimensions, metrics); + } else { + for (int i = 0; i < numMetrics; i++) { + try { + ValueAggregator metricValueAggregator = metricAggregatorInfos.get(i).getValueAggregators(); + StarTreeNumericType starTreeNumericType = metricAggregatorInfos.get(i).getAggregatedValueType(); + aggregatedSegmentDocument.metrics[i] = metricValueAggregator.mergeAggregatedValueAndSegmentValue( + aggregatedSegmentDocument.metrics[i], + getLong(segmentDocument.metrics[i]), + starTreeNumericType + ); + } catch (Exception e) { + logger.error("Cannot apply segment doc value for aggregation", e); + throw new IllegalStateException("Cannot apply segment doc value for aggregation [" + segmentDocument.metrics[i] + "]"); + } + } + return aggregatedSegmentDocument; + } + } + + /** + * Safely converts the metric value of object type to long. + * + * @param metric value of the metric + * @return converted metric value to long + */ + private static long getLong(Object metric) { + + Long metricValue = null; + try { + if (metric instanceof Long) { + metricValue = (long) metric; + } else if (metric != null) { + metricValue = Long.valueOf(String.valueOf(metric)); + } + } catch (Exception e) { + throw new IllegalStateException("unable to cast segment metric", e); + } + + if (metricValue == null) { + throw new IllegalStateException("unable to cast segment metric"); + } + return metricValue; + } + + /** + * Merges a star-tree document into an aggregated star-tree document. + * A new aggregated star-tree document is created if the aggregated document is null. 
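+ * <p>Dimension values are copied through unchanged; a null dimension value denotes the star (ALL) value,
+ * see {@link #STAR_IN_DOC_VALUES_INDEX}.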
+ * + * @param aggregatedDocument aggregated star-tree document + * @param starTreeDocument segment star-tree document + * @return merged star-tree document + */ + public StarTreeDocument reduceStarTreeDocuments(StarTreeDocument aggregatedDocument, StarTreeDocument starTreeDocument) { + // aggregate the documents + if (aggregatedDocument == null) { + Long[] dimensions = Arrays.copyOf(starTreeDocument.dimensions, numDimensions); + Object[] metrics = new Object[numMetrics]; + for (int i = 0; i < numMetrics; i++) { + try { + metrics[i] = metricAggregatorInfos.get(i).getValueAggregators().getInitialAggregatedValue(starTreeDocument.metrics[i]); + } catch (Exception e) { + logger.error("Cannot get value for aggregation", e); + throw new IllegalStateException("Cannot get value for aggregation[" + starTreeDocument.metrics[i] + "]"); + } + } + return new StarTreeDocument(dimensions, metrics); + } else { + for (int i = 0; i < numMetrics; i++) { + try { + aggregatedDocument.metrics[i] = metricAggregatorInfos.get(i) + .getValueAggregators() + .mergeAggregatedValues(starTreeDocument.metrics[i], aggregatedDocument.metrics[i]); + } catch (Exception e) { + logger.error("Cannot apply value to aggregated document for aggregation", e); + throw new IllegalStateException( + "Cannot apply value to aggregated document for aggregation [" + starTreeDocument.metrics[i] + "]" + ); + } + } + return aggregatedDocument; + } + } + + /** + * Adds a document to star-tree + * + * @param starTreeDocument star-tree document + * @throws IOException throws an exception if we are unable to add the doc + */ + private void appendToStarTree(StarTreeDocument starTreeDocument) throws IOException { + appendStarTreeDocument(starTreeDocument); + numStarTreeDocs++; + } + + /** + * Returns a new star-tree node + * + * @return return new star-tree node + */ + private StarTreeBuilderUtils.TreeNode getNewNode() { + numStarTreeNodes++; + return new StarTreeBuilderUtils.TreeNode(); + } + + /** + * Implements the algorithm to construct a star-tree + * + * @param node star-tree node + * @param startDocId start document id + * @param endDocId end document id + * @throws IOException throws an exception if we are unable to construct the tree + */ + private void constructStarTree(StarTreeBuilderUtils.TreeNode node, int startDocId, int endDocId) throws IOException { + + int childDimensionId = node.dimensionId + 1; + if (childDimensionId == numDimensions) { + return; + } + + // Construct all non-star children nodes + node.childDimensionId = childDimensionId; + Map children = constructNonStarNodes(startDocId, endDocId, childDimensionId); + node.children = children; + + // Construct star-node if required + if (!skipStarNodeCreationForDimensions.contains(childDimensionId) && children.size() > 1) { + children.put((long) StarTreeBuilderUtils.ALL, constructStarNode(startDocId, endDocId, childDimensionId)); + } + + // Further split on child nodes if required + for (StarTreeBuilderUtils.TreeNode child : children.values()) { + if (child.endDocId - child.startDocId > maxLeafDocuments) { + constructStarTree(child, child.startDocId, child.endDocId); + } + } + } + + /** + * Constructs non star tree nodes + * + * @param startDocId start document id (inclusive) + * @param endDocId end document id (exclusive) + * @param dimensionId id of the dimension in the star tree + * @return root node with non-star nodes constructed + * @throws IOException throws an exception if we are unable to construct non-star nodes + */ + private Map constructNonStarNodes(int startDocId, 
int endDocId, int dimensionId) + throws IOException { + Map nodes = new HashMap<>(); + int nodeStartDocId = startDocId; + Long nodeDimensionValue = getDimensionValue(startDocId, dimensionId); + for (int i = startDocId + 1; i < endDocId; i++) { + Long dimensionValue = getDimensionValue(i, dimensionId); + if (!dimensionValue.equals(nodeDimensionValue)) { + StarTreeBuilderUtils.TreeNode child = getNewNode(); + child.dimensionId = dimensionId; + child.dimensionValue = nodeDimensionValue; + child.startDocId = nodeStartDocId; + child.endDocId = i; + nodes.put(nodeDimensionValue, child); + + nodeStartDocId = i; + nodeDimensionValue = dimensionValue; + } + } + StarTreeBuilderUtils.TreeNode lastNode = getNewNode(); + lastNode.dimensionId = dimensionId; + lastNode.dimensionValue = nodeDimensionValue; + lastNode.startDocId = nodeStartDocId; + lastNode.endDocId = endDocId; + nodes.put(nodeDimensionValue, lastNode); + return nodes; + } + + /** + * Constructs star tree nodes + * + * @param startDocId start document id (inclusive) + * @param endDocId end document id (exclusive) + * @param dimensionId id of the dimension in the star tree + * @return root node with star nodes constructed + * @throws IOException throws an exception if we are unable to construct non-star nodes + */ + private StarTreeBuilderUtils.TreeNode constructStarNode(int startDocId, int endDocId, int dimensionId) throws IOException { + StarTreeBuilderUtils.TreeNode starNode = getNewNode(); + starNode.dimensionId = dimensionId; + starNode.dimensionValue = StarTreeBuilderUtils.ALL; + starNode.isStarNode = true; + starNode.startDocId = numStarTreeDocs; + Iterator starTreeDocumentIterator = generateStarTreeDocumentsForStarNode(startDocId, endDocId, dimensionId); + while (starTreeDocumentIterator.hasNext()) { + appendToStarTree(starTreeDocumentIterator.next()); + } + starNode.endDocId = numStarTreeDocs; + return starNode; + } + + /** + * Returns aggregated star-tree document + * + * @param node star-tree node + * @return aggregated star-tree documents + * @throws IOException throws an exception upon failing to create new aggregated docs based on star tree + */ + private StarTreeDocument createAggregatedDocs(StarTreeBuilderUtils.TreeNode node) throws IOException { + StarTreeDocument aggregatedStarTreeDocument = null; + if (node.children == null) { + // For leaf node + + if (node.startDocId == node.endDocId - 1) { + // If it has only one document, use it as the aggregated document + aggregatedStarTreeDocument = getStarTreeDocument(node.startDocId); + node.aggregatedDocId = node.startDocId; + } else { + // If it has multiple documents, aggregate all of them + for (int i = node.startDocId; i < node.endDocId; i++) { + aggregatedStarTreeDocument = reduceStarTreeDocuments(aggregatedStarTreeDocument, getStarTreeDocument(i)); + } + if (null == aggregatedStarTreeDocument) { + throw new IllegalStateException("aggregated star-tree document is null after reducing the documents"); + } + for (int i = node.dimensionId + 1; i < numDimensions; i++) { + aggregatedStarTreeDocument.dimensions[i] = STAR_IN_DOC_VALUES_INDEX; + } + node.aggregatedDocId = numStarTreeDocs; + appendToStarTree(aggregatedStarTreeDocument); + } + } else { + // For non-leaf node + if (node.children.containsKey((long) StarTreeBuilderUtils.ALL)) { + // If it has star child, use the star child aggregated document directly + for (StarTreeBuilderUtils.TreeNode child : node.children.values()) { + if (child.isStarNode) { + aggregatedStarTreeDocument = createAggregatedDocs(child); + 
+ /**
+ * Returns the aggregated star-tree document for a node
+ *
+ * @param node star-tree node
+ * @return aggregated star-tree document
+ * @throws IOException if the aggregated documents cannot be created from the star tree
+ */
+ private StarTreeDocument createAggregatedDocs(StarTreeBuilderUtils.TreeNode node) throws IOException {
+ StarTreeDocument aggregatedStarTreeDocument = null;
+ if (node.children == null) {
+ // For leaf node
+
+ if (node.startDocId == node.endDocId - 1) {
+ // If it has only one document, use it as the aggregated document
+ aggregatedStarTreeDocument = getStarTreeDocument(node.startDocId);
+ node.aggregatedDocId = node.startDocId;
+ } else {
+ // If it has multiple documents, aggregate all of them
+ for (int i = node.startDocId; i < node.endDocId; i++) {
+ aggregatedStarTreeDocument = reduceStarTreeDocuments(aggregatedStarTreeDocument, getStarTreeDocument(i));
+ }
+ if (null == aggregatedStarTreeDocument) {
+ throw new IllegalStateException("aggregated star-tree document is null after reducing the documents");
+ }
+ for (int i = node.dimensionId + 1; i < numDimensions; i++) {
+ aggregatedStarTreeDocument.dimensions[i] = STAR_IN_DOC_VALUES_INDEX;
+ }
+ node.aggregatedDocId = numStarTreeDocs;
+ appendToStarTree(aggregatedStarTreeDocument);
+ }
+ } else {
+ // For non-leaf node
+ if (node.children.containsKey((long) StarTreeBuilderUtils.ALL)) {
+ // If it has a star child, use the star child's aggregated document directly
+ for (StarTreeBuilderUtils.TreeNode child : node.children.values()) {
+ if (child.isStarNode) {
+ aggregatedStarTreeDocument = createAggregatedDocs(child);
+ node.aggregatedDocId = child.aggregatedDocId;
+ } else {
+ createAggregatedDocs(child);
+ }
+ }
+ } else {
+ // If no star child exists, aggregate all aggregated documents from non-star children
+ for (StarTreeBuilderUtils.TreeNode child : node.children.values()) {
+ aggregatedStarTreeDocument = reduceStarTreeDocuments(aggregatedStarTreeDocument, createAggregatedDocs(child));
+ }
+ if (null == aggregatedStarTreeDocument) {
+ throw new IllegalStateException("aggregated star-tree document is null after reducing the documents");
+ }
+ for (int i = node.dimensionId + 1; i < numDimensions; i++) {
+ aggregatedStarTreeDocument.dimensions[i] = STAR_IN_DOC_VALUES_INDEX;
+ }
+ node.aggregatedDocId = numStarTreeDocs;
+ appendToStarTree(aggregatedStarTreeDocument);
+ }
+ }
+ return aggregatedStarTreeDocument;
+ }
+
+ /**
+ * Converts the value of a date-time dimension to the configured granularity
+ *
+ * @param fieldName name of the field
+ * @param val value of the field
+ * @return the dimension value converted to the target granularity
+ */
+ private long handleDateDimension(final String fieldName, final long val) {
+ // TODO: handle timestamp granularity
+ return val;
+ }
+
+ public void close() throws IOException {
+
+ }
+
+}
diff --git a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java
index a0ef8de07fbf2..e3f63b1c27b83 100644
--- a/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java
+++ b/server/src/main/java/org/opensearch/cluster/metadata/Metadata.java
@@ -1287,6 +1287,7 @@ public Builder templates(Map<String, IndexTemplateMetadata> templates) {
 }
 public Builder templates(TemplatesMetadata templatesMetadata) {
+ this.templates.clear();
 this.templates.putAll(templatesMetadata.getTemplates());
 return this;
 }
diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java
index 16edec112f123..7973745ce84b3 100644
--- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java
+++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataCreateIndexService.java
@@ -85,6 +85,7 @@
 import org.opensearch.index.IndexNotFoundException;
 import org.opensearch.index.IndexService;
 import org.opensearch.index.IndexSettings;
+import org.opensearch.index.compositeindex.CompositeIndexValidator;
 import org.opensearch.index.mapper.DocumentMapper;
 import org.opensearch.index.mapper.MapperService;
 import org.opensearch.index.mapper.MapperService.MergeReason;
@@ -1318,6 +1319,10 @@ private static void updateIndexMappingsAndBuildSortOrder(
 }
 }
+ if (mapperService.isCompositeIndexPresent()) {
+ CompositeIndexValidator.validate(mapperService, indexService.getCompositeIndexSettings(), indexService.getIndexSettings());
+ }
+
 if (sourceMetadata == null) {
 // now that the mapping is merged we can validate the index sort.
// we cannot validate for index shrinking since the mapping is empty diff --git a/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java b/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java index 1406287149e8d..43894db86c512 100644 --- a/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java +++ b/server/src/main/java/org/opensearch/cluster/metadata/MetadataMappingService.java @@ -55,6 +55,7 @@ import org.opensearch.core.common.Strings; import org.opensearch.core.index.Index; import org.opensearch.index.IndexService; +import org.opensearch.index.compositeindex.CompositeIndexValidator; import org.opensearch.index.mapper.DocumentMapper; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.mapper.MapperService.MergeReason; @@ -282,6 +283,7 @@ private ClusterState applyRequest( // first, simulate: just call merge and ignore the result existingMapper.merge(newMapper.mapping(), MergeReason.MAPPING_UPDATE); } + } Metadata.Builder builder = Metadata.builder(metadata); boolean updated = false; @@ -291,7 +293,7 @@ private ClusterState applyRequest( // we use the exact same indexService and metadata we used to validate above here to actually apply the update final Index index = indexMetadata.getIndex(); final MapperService mapperService = indexMapperServices.get(index); - + boolean isCompositeFieldPresent = !mapperService.getCompositeFieldTypes().isEmpty(); CompressedXContent existingSource = null; DocumentMapper existingMapper = mapperService.documentMapper(); if (existingMapper != null) { @@ -302,6 +304,14 @@ private ClusterState applyRequest( mappingUpdateSource, MergeReason.MAPPING_UPDATE ); + + CompositeIndexValidator.validate( + mapperService, + indicesService.getCompositeIndexSettings(), + mapperService.getIndexSettings(), + isCompositeFieldPresent + ); + CompressedXContent updatedSource = mergedMapper.mappingSource(); if (existingSource != null) { diff --git a/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java b/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java index 690621c2e7bca..653f81830ed17 100644 --- a/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java +++ b/server/src/main/java/org/opensearch/cluster/node/DiscoveryNode.java @@ -130,6 +130,10 @@ public static boolean isSearchNode(Settings settings) { return hasRole(settings, DiscoveryNodeRole.SEARCH_ROLE); } + public static boolean isDedicatedSearchNode(Settings settings) { + return getRolesFromSettings(settings).stream().allMatch(DiscoveryNodeRole.SEARCH_ROLE::equals); + } + private final String nodeName; private final String nodeId; private final String ephemeralId; diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java index ec25d041bda43..6978c988fd648 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/allocator/LocalShardsBalancer.java @@ -32,7 +32,6 @@ import org.opensearch.gateway.PriorityComparator; import java.util.ArrayList; -import java.util.Arrays; import java.util.Collections; import java.util.Comparator; import java.util.HashMap; @@ -41,7 +40,6 @@ import java.util.List; import java.util.Map; import java.util.Set; -import java.util.stream.Collectors; import java.util.stream.Stream; import 
java.util.stream.StreamSupport; @@ -779,15 +777,16 @@ void allocateUnassigned() { * if we allocate for instance (0, R, IDX1) we move the second replica to the secondary array and proceed with * the next replica. If we could not find a node to allocate (0,R,IDX1) we move all it's replicas to ignoreUnassigned. */ - ShardRouting[] unassignedShards = unassigned.drain(); - List allUnassignedShards = Arrays.stream(unassignedShards).collect(Collectors.toList()); - List localUnassignedShards = allUnassignedShards.stream() - .filter(shard -> RoutingPool.LOCAL_ONLY.equals(RoutingPool.getShardPool(shard, allocation))) - .collect(Collectors.toList()); - allUnassignedShards.removeAll(localUnassignedShards); - allUnassignedShards.forEach(shard -> routingNodes.unassigned().add(shard)); - unassignedShards = localUnassignedShards.toArray(new ShardRouting[0]); - ShardRouting[] primary = unassignedShards; + List primaryList = new ArrayList<>(); + for (ShardRouting shard : unassigned.drain()) { + if (RoutingPool.LOCAL_ONLY.equals(RoutingPool.getShardPool(shard, allocation))) { + primaryList.add(shard); + } else { + routingNodes.unassigned().add(shard); + } + } + + ShardRouting[] primary = primaryList.toArray(new ShardRouting[0]); ShardRouting[] secondary = new ShardRouting[primary.length]; int secondaryLength = 0; int primaryLength = primary.length; diff --git a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java index 4fc5fff805663..67fe4ea1dcb1b 100644 --- a/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java +++ b/server/src/main/java/org/opensearch/cluster/routing/allocation/decider/RemoteStoreMigrationAllocationDecider.java @@ -44,8 +44,6 @@ import org.opensearch.node.remotestore.RemoteStoreNodeService.CompatibilityMode; import org.opensearch.node.remotestore.RemoteStoreNodeService.Direction; -import java.util.Locale; - /** * A new allocation decider for migration of document replication clusters to remote store backed clusters: * - For STRICT compatibility mode, the decision is always YES @@ -101,7 +99,7 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing if (migrationDirection.equals(Direction.NONE)) { // remote backed indices on docrep nodes and non remote backed indices on remote nodes are not allowed boolean isNoDecision = remoteSettingsBackedIndex ^ targetNode.isRemoteStoreNode(); - String reason = String.format(Locale.ROOT, " for %sremote store backed index", remoteSettingsBackedIndex ? "" : "non "); + String reason = " for " + (remoteSettingsBackedIndex ? "" : "non ") + "remote store backed index"; return allocation.decision( isNoDecision ? Decision.NO : Decision.YES, NAME, @@ -114,11 +112,9 @@ public Decision canAllocate(ShardRouting shardRouting, RoutingNode node, Routing // check for remote store backed indices if (remoteSettingsBackedIndex && targetNode.isRemoteStoreNode() == false) { // allocations and relocations must be to a remote node - String reason = String.format( - Locale.ROOT, - " because a remote store backed index's shard copy can only be %s to a remote node", - ((shardRouting.assignedToNode() == false) ? "allocated" : "relocated") - ); + String reason = new StringBuilder(" because a remote store backed index's shard copy can only be ").append( + (shardRouting.assignedToNode() == false) ? 
"allocated" : "relocated" + ).append(" to a remote node").toString(); return allocation.decision(Decision.NO, NAME, getDecisionDetails(false, shardRouting, targetNode, reason)); } @@ -168,16 +164,18 @@ private Decision replicaShardDecision(ShardRouting replicaShardRouting, Discover // get detailed reason for the decision private String getDecisionDetails(boolean isYes, ShardRouting shardRouting, DiscoveryNode targetNode, String reason) { - return String.format( - Locale.ROOT, - "[%s migration_direction]: %s shard copy %s be %s to a %s node%s", - migrationDirection.direction, - (shardRouting.primary() ? "primary" : "replica"), - (isYes ? "can" : "can not"), - ((shardRouting.assignedToNode() == false) ? "allocated" : "relocated"), - (targetNode.isRemoteStoreNode() ? "remote" : "non-remote"), - reason - ); + return new StringBuilder("[").append(migrationDirection.direction) + .append(" migration_direction]: ") + .append(shardRouting.primary() ? "primary" : "replica") + .append(" shard copy ") + .append(isYes ? "can" : "can not") + .append(" be ") + .append((shardRouting.assignedToNode() == false) ? "allocated" : "relocated") + .append(" to a ") + .append(targetNode.isRemoteStoreNode() ? "remote" : "non-remote") + .append(" node") + .append(reason) + .toString(); } } diff --git a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java index 233a8d732d178..5dcf23ae52294 100644 --- a/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/ClusterSettings.java @@ -115,6 +115,7 @@ import org.opensearch.index.ShardIndexingPressureMemoryManager; import org.opensearch.index.ShardIndexingPressureSettings; import org.opensearch.index.ShardIndexingPressureStore; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.remote.RemoteStorePressureSettings; import org.opensearch.index.remote.RemoteStoreStatsTrackerFactory; import org.opensearch.index.store.remote.filecache.FileCacheSettings; @@ -754,7 +755,10 @@ public void apply(Settings value, Settings current, Settings previous) { RemoteStoreSettings.CLUSTER_REMOTE_STORE_PATH_HASH_ALGORITHM_SETTING, RemoteStoreSettings.CLUSTER_REMOTE_MAX_TRANSLOG_READERS, RemoteStoreSettings.CLUSTER_REMOTE_STORE_TRANSLOG_METADATA, - SearchService.CLUSTER_ALLOW_DERIVED_FIELD_SETTING + SearchService.CLUSTER_ALLOW_DERIVED_FIELD_SETTING, + + // Composite index settings + CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING ) ) ); diff --git a/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java b/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java index 238df1bd90113..b6166f5d3cce1 100644 --- a/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/FeatureFlagSettings.java @@ -37,6 +37,7 @@ protected FeatureFlagSettings( FeatureFlags.TIERED_REMOTE_INDEX_SETTING, FeatureFlags.REMOTE_STORE_MIGRATION_EXPERIMENTAL_SETTING, FeatureFlags.PLUGGABLE_CACHE_SETTING, - FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL_SETTING + FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL_SETTING, + FeatureFlags.STAR_TREE_INDEX_SETTING ); } diff --git a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java index 1488f5d30b4ba..ca2c4dab6102b 100644 --- 
a/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java +++ b/server/src/main/java/org/opensearch/common/settings/IndexScopedSettings.java @@ -52,6 +52,7 @@ import org.opensearch.index.SearchSlowLog; import org.opensearch.index.TieredMergePolicyProvider; import org.opensearch.index.cache.bitset.BitsetFilterCache; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.fielddata.IndexFieldDataService; import org.opensearch.index.mapper.FieldMapper; @@ -239,6 +240,15 @@ public final class IndexScopedSettings extends AbstractScopedSettings { // Settings for concurrent segment search IndexSettings.INDEX_CONCURRENT_SEGMENT_SEARCH_SETTING, IndexSettings.ALLOW_DERIVED_FIELDS, + + // Settings for star tree index + StarTreeIndexSettings.STAR_TREE_DEFAULT_MAX_LEAF_DOCS, + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING, + StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING, + StarTreeIndexSettings.DEFAULT_METRICS_LIST, + StarTreeIndexSettings.DEFAULT_DATE_INTERVALS, + StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING, + // validate that built-in similarities don't get redefined Setting.groupSetting("index.similarity.", (s) -> { Map groups = s.getAsGroups(); diff --git a/server/src/main/java/org/opensearch/common/util/FeatureFlags.java b/server/src/main/java/org/opensearch/common/util/FeatureFlags.java index 6c6e2f2d600f0..ceb2559a0e16c 100644 --- a/server/src/main/java/org/opensearch/common/util/FeatureFlags.java +++ b/server/src/main/java/org/opensearch/common/util/FeatureFlags.java @@ -100,6 +100,13 @@ public class FeatureFlags { Property.NodeScope ); + /** + * Gates the functionality of star tree index, which improves the performance of search + * aggregations. + */ + public static final String STAR_TREE_INDEX = "opensearch.experimental.feature.composite_index.star_tree.enabled"; + public static final Setting STAR_TREE_INDEX_SETTING = Setting.boolSetting(STAR_TREE_INDEX, false, Property.NodeScope); + private static final List> ALL_FEATURE_FLAG_SETTINGS = List.of( REMOTE_STORE_MIGRATION_EXPERIMENTAL_SETTING, EXTENSIONS_SETTING, @@ -108,7 +115,8 @@ public class FeatureFlags { DATETIME_FORMATTER_CACHING_SETTING, TIERED_REMOTE_INDEX_SETTING, PLUGGABLE_CACHE_SETTING, - REMOTE_PUBLICATION_EXPERIMENTAL_SETTING + REMOTE_PUBLICATION_EXPERIMENTAL_SETTING, + STAR_TREE_INDEX_SETTING ); /** * Should store the settings from opensearch.yml. 
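As a usage note for the flag registered above: feature flags are node-scoped settings, so the star-tree code paths stay dormant unless the flag is set at node startup (the FeatureFlags javadoc notes the values come from opensearch.yml). A small hedged sketch of enabling it programmatically, e.g. for test node settings; the key string is the STAR_TREE_INDEX constant shown above:

    import org.opensearch.common.settings.Settings;

    // Builds node settings with the experimental star-tree flag enabled;
    // equivalent to adding this line to opensearch.yml:
    //   opensearch.experimental.feature.composite_index.star_tree.enabled: true
    class StarTreeFlagExample {
        static Settings nodeSettings() {
            return Settings.builder()
                .put("opensearch.experimental.feature.composite_index.star_tree.enabled", true)
                .build();
        }
    }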
diff --git a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java index ef14adeb42b68..74abe9cd257b4 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java +++ b/server/src/main/java/org/opensearch/gateway/remote/RemoteClusterStateService.java @@ -356,7 +356,7 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( && clusterState.getNodes().delta(previousClusterState.getNodes()).hasChanges(); final boolean updateClusterBlocks = isPublicationEnabled && !clusterState.blocks().equals(previousClusterState.blocks()); final boolean updateHashesOfConsistentSettings = isPublicationEnabled - || Metadata.isHashesOfConsistentSettingsEqual(previousClusterState.metadata(), clusterState.metadata()) == false; + && Metadata.isHashesOfConsistentSettingsEqual(previousClusterState.metadata(), clusterState.metadata()) == false; uploadedMetadataResults = writeMetadataInParallel( clusterState, @@ -476,7 +476,8 @@ public RemoteClusterStateManifestInfo writeIncrementalMetadata( return manifestDetails; } - private UploadedMetadataResults writeMetadataInParallel( + // package private for testing + UploadedMetadataResults writeMetadataInParallel( ClusterState clusterState, List indexToUpload, Map prevIndexMetadataByName, diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java index f384908bc6b65..affbc7ba66cb8 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java +++ b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteClusterStateCustoms.java @@ -34,12 +34,12 @@ */ public class RemoteClusterStateCustoms extends AbstractRemoteWritableBlobEntity { public static final String CLUSTER_STATE_CUSTOM = "cluster-state-custom"; + public final ChecksumWritableBlobStoreFormat clusterStateCustomsFormat; private long stateVersion; private final String customType; private ClusterState.Custom custom; private final NamedWriteableRegistry namedWriteableRegistry; - private final ChecksumWritableBlobStoreFormat clusterStateCustomsFormat; public RemoteClusterStateCustoms( final ClusterState.Custom custom, diff --git a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java index 4c7069ee8be9e..ec5dfbec820d4 100644 --- a/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java +++ b/server/src/main/java/org/opensearch/gateway/remote/model/RemoteCustomMetadata.java @@ -12,6 +12,7 @@ import org.opensearch.common.io.Streams; import org.opensearch.common.remote.AbstractRemoteWritableBlobEntity; import org.opensearch.common.remote.BlobPathParameters; +import org.opensearch.core.common.io.stream.NamedWriteableAwareStreamInput; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; import org.opensearch.core.common.io.stream.StreamInput; import org.opensearch.core.compress.Compressor; @@ -122,6 +123,8 @@ public UploadedMetadata getUploadedMetadata() { public static Custom readFrom(StreamInput streamInput, NamedWriteableRegistry namedWriteableRegistry, String customType) throws IOException { - return namedWriteableRegistry.getReader(Custom.class, customType).read(streamInput); + try (StreamInput in = new 
NamedWriteableAwareStreamInput(streamInput, namedWriteableRegistry)) { + return namedWriteableRegistry.getReader(Custom.class, customType).read(in); + } } } diff --git a/server/src/main/java/org/opensearch/index/IndexModule.java b/server/src/main/java/org/opensearch/index/IndexModule.java index 4c494a6b35153..09b904394ee09 100644 --- a/server/src/main/java/org/opensearch/index/IndexModule.java +++ b/server/src/main/java/org/opensearch/index/IndexModule.java @@ -66,6 +66,7 @@ import org.opensearch.index.cache.query.DisabledQueryCache; import org.opensearch.index.cache.query.IndexQueryCache; import org.opensearch.index.cache.query.QueryCache; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.Engine; import org.opensearch.index.engine.EngineConfigFactory; import org.opensearch.index.engine.EngineFactory; @@ -311,6 +312,7 @@ public Iterator> settings() { private final BooleanSupplier allowExpensiveQueries; private final Map recoveryStateFactories; private final FileCache fileCache; + private final CompositeIndexSettings compositeIndexSettings; /** * Construct the index module for the index with the specified index settings. The index module contains extension points for plugins @@ -330,7 +332,8 @@ public IndexModule( final BooleanSupplier allowExpensiveQueries, final IndexNameExpressionResolver expressionResolver, final Map recoveryStateFactories, - final FileCache fileCache + final FileCache fileCache, + final CompositeIndexSettings compositeIndexSettings ) { this.indexSettings = indexSettings; this.analysisRegistry = analysisRegistry; @@ -343,6 +346,7 @@ public IndexModule( this.expressionResolver = expressionResolver; this.recoveryStateFactories = recoveryStateFactories; this.fileCache = fileCache; + this.compositeIndexSettings = compositeIndexSettings; } public IndexModule( @@ -364,6 +368,7 @@ public IndexModule( allowExpensiveQueries, expressionResolver, recoveryStateFactories, + null, null ); } @@ -739,7 +744,8 @@ public IndexService newIndexService( clusterDefaultRefreshIntervalSupplier, recoverySettings, remoteStoreSettings, - fileCache + fileCache, + compositeIndexSettings ); success = true; return indexService; diff --git a/server/src/main/java/org/opensearch/index/IndexService.java b/server/src/main/java/org/opensearch/index/IndexService.java index a7849bcf80474..12b02d3dbd6fa 100644 --- a/server/src/main/java/org/opensearch/index/IndexService.java +++ b/server/src/main/java/org/opensearch/index/IndexService.java @@ -73,6 +73,7 @@ import org.opensearch.index.cache.IndexCache; import org.opensearch.index.cache.bitset.BitsetFilterCache; import org.opensearch.index.cache.query.QueryCache; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.Engine; import org.opensearch.index.engine.EngineConfigFactory; import org.opensearch.index.engine.EngineFactory; @@ -192,6 +193,7 @@ public class IndexService extends AbstractIndexComponent implements IndicesClust private final RecoverySettings recoverySettings; private final RemoteStoreSettings remoteStoreSettings; private final FileCache fileCache; + private final CompositeIndexSettings compositeIndexSettings; public IndexService( IndexSettings indexSettings, @@ -228,7 +230,8 @@ public IndexService( Supplier clusterDefaultRefreshIntervalSupplier, RecoverySettings recoverySettings, RemoteStoreSettings remoteStoreSettings, - FileCache fileCache + FileCache fileCache, + CompositeIndexSettings compositeIndexSettings ) { super(indexSettings); 
this.allowExpensiveQueries = allowExpensiveQueries; @@ -306,6 +309,7 @@ public IndexService( this.translogFactorySupplier = translogFactorySupplier; this.recoverySettings = recoverySettings; this.remoteStoreSettings = remoteStoreSettings; + this.compositeIndexSettings = compositeIndexSettings; this.fileCache = fileCache; updateFsyncTaskIfNecessary(); } @@ -381,6 +385,7 @@ public IndexService( clusterDefaultRefreshIntervalSupplier, recoverySettings, remoteStoreSettings, + null, null ); } @@ -597,7 +602,21 @@ public synchronized IndexShard createShard( this.indexSettings.getRemoteStorePathStrategy() ); } - remoteStore = new Store(shardId, this.indexSettings, remoteDirectory, lock, Store.OnClose.EMPTY, path); + // When an instance of Store is created, a shardlock is created which is released on closing the instance of store. + // Currently, we create 2 instances of store for remote store backed indices: store and remoteStore. + // As there can be only one shardlock acquired for a given shard, the lock is shared between store and remoteStore. + // This creates an issue when we are deleting the index as it results in closing both store and remoteStore. + // Sample test failure: https://github.com/opensearch-project/OpenSearch/issues/13871 + // The following method provides ShardLock that is not maintained by NodeEnvironment. + // As part of https://github.com/opensearch-project/OpenSearch/issues/13075, we want to move away from keeping 2 + // store instances. + ShardLock remoteStoreLock = new ShardLock(shardId) { + @Override + protected void closeInternal() { + // Do nothing for shard lock on remote store + } + }; + remoteStore = new Store(shardId, this.indexSettings, remoteDirectory, remoteStoreLock, Store.OnClose.EMPTY, path); } else { // Disallow shards with remote store based settings to be created on non-remote store enabled nodes // Even though we have `RemoteStoreMigrationAllocationDecider` in place to prevent something like this from happening at the @@ -620,7 +639,6 @@ public synchronized IndexShard createShard( } else { directory = directoryFactory.newDirectory(this.indexSettings, path); } - store = new Store( shardId, this.indexSettings, @@ -1110,6 +1128,10 @@ private void rescheduleRefreshTasks() { } } + public CompositeIndexSettings getCompositeIndexSettings() { + return compositeIndexSettings; + } + /** * Shard Store Deleter Interface * diff --git a/server/src/main/java/org/opensearch/index/codec/CodecService.java b/server/src/main/java/org/opensearch/index/codec/CodecService.java index 67f38536a0d11..59fafdf1ba74e 100644 --- a/server/src/main/java/org/opensearch/index/codec/CodecService.java +++ b/server/src/main/java/org/opensearch/index/codec/CodecService.java @@ -39,6 +39,7 @@ import org.opensearch.common.Nullable; import org.opensearch.common.collect.MapBuilder; import org.opensearch.index.IndexSettings; +import org.opensearch.index.codec.composite.CompositeCodecFactory; import org.opensearch.index.mapper.MapperService; import java.util.Map; @@ -63,6 +64,7 @@ public class CodecService { * the raw unfiltered lucene default. 
useful for testing */ public static final String LUCENE_DEFAULT_CODEC = "lucene_default"; + private final CompositeCodecFactory compositeCodecFactory = new CompositeCodecFactory(); public CodecService(@Nullable MapperService mapperService, IndexSettings indexSettings, Logger logger) { final MapBuilder codecs = MapBuilder.newMapBuilder(); @@ -73,10 +75,16 @@ public CodecService(@Nullable MapperService mapperService, IndexSettings indexSe codecs.put(BEST_COMPRESSION_CODEC, new Lucene99Codec(Mode.BEST_COMPRESSION)); codecs.put(ZLIB, new Lucene99Codec(Mode.BEST_COMPRESSION)); } else { - codecs.put(DEFAULT_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); - codecs.put(LZ4, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); - codecs.put(BEST_COMPRESSION_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); - codecs.put(ZLIB, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); + // CompositeCodec still delegates to PerFieldMappingPostingFormatCodec + // We can still support all the compression codecs when composite index is present + if (mapperService.isCompositeIndexPresent()) { + codecs.putAll(compositeCodecFactory.getCompositeIndexCodecs(mapperService, logger)); + } else { + codecs.put(DEFAULT_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); + codecs.put(LZ4, new PerFieldMappingPostingFormatCodec(Mode.BEST_SPEED, mapperService, logger)); + codecs.put(BEST_COMPRESSION_CODEC, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); + codecs.put(ZLIB, new PerFieldMappingPostingFormatCodec(Mode.BEST_COMPRESSION, mapperService, logger)); + } } codecs.put(LUCENE_DEFAULT_CODEC, Codec.getDefault()); for (String codec : Codec.availableCodecs()) { diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesFormat.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesFormat.java new file mode 100644 index 0000000000000..ed70158997f9a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesFormat.java @@ -0,0 +1,94 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesFormat; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene90.Lucene90DocValuesFormat; +import org.apache.lucene.index.SegmentReadState; +import org.apache.lucene.index.SegmentWriteState; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.MapperService; + +import java.io.IOException; + +/** + * DocValues format to handle composite indices + * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite90DocValuesFormat extends DocValuesFormat { + /** + * Creates a new docvalues format. + * + *

The provided name will be written into the index segment in some configurations (such as + * when using {@code PerFieldDocValuesFormat}): in such configurations, for the segment to be read + * this class should be registered with Java's SPI mechanism (registered in META-INF/ of your jar + * file, etc). + */ + private final DocValuesFormat delegate; + private final MapperService mapperService; + + /** Data codec name for Composite Doc Values Format */ + public static final String DATA_CODEC_NAME = "Composite99FormatData"; + + /** Meta codec name for Composite Doc Values Format */ + public static final String META_CODEC_NAME = "Composite99FormatMeta"; + + /** Filename extension for the composite index data */ + public static final String DATA_EXTENSION = "sttd"; + + /** Filename extension for the composite index meta */ + public static final String META_EXTENSION = "sttm"; + + /** Data doc values codec name for Composite Doc Values Format */ + public static final String DATA_DOC_VALUES_CODEC = "Composite99DocValuesData"; + + /** Meta doc values codec name for Composite Doc Values Format */ + static final String META_DOC_VALUES_CODEC = "Composite99DocValuesMetadata"; + + /** Filename extension for the composite index data doc values */ + public static final String DATA_DOC_VALUES_EXTENSION = "sttddvm"; + + /** Filename extension for the composite index meta doc values */ + public static final String META_DOC_VALUES_EXTENSION = "sttmdvm"; + + /** Initial version for the Composite90DocValuesFormat */ + public static final int VERSION_START = 0; + + /** Current version for the Composite90DocValuesFormat */ + public static final int VERSION_CURRENT = VERSION_START; + + // needed for SPI + public Composite90DocValuesFormat() { + this(new Lucene90DocValuesFormat(), null); + } + + public Composite90DocValuesFormat(MapperService mapperService) { + this(new Lucene90DocValuesFormat(), mapperService); + } + + public Composite90DocValuesFormat(DocValuesFormat delegate, MapperService mapperService) { + super(delegate.getName()); + this.delegate = delegate; + this.mapperService = mapperService; + } + + @Override + public DocValuesConsumer fieldsConsumer(SegmentWriteState state) throws IOException { + return new Composite90DocValuesWriter(delegate.fieldsConsumer(state), state, mapperService); + } + + @Override + public DocValuesProducer fieldsProducer(SegmentReadState state) throws IOException { + return new Composite90DocValuesReader(delegate.fieldsProducer(state), state); + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesReader.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesReader.java new file mode 100644 index 0000000000000..d2a79ab328373 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesReader.java @@ -0,0 +1,287 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.codec.composite; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.CodecUtil; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene90.StarTree99DocValuesProducer; +import org.apache.lucene.index.BinaryDocValues; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexFileNames; +import org.apache.lucene.index.NumericDocValues; +import org.apache.lucene.index.SegmentReadState; +import org.apache.lucene.index.SortedDocValues; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.SortedSetDocValues; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.store.ChecksumIndexInput; +import org.apache.lucene.store.IndexInput; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.util.io.IOUtils; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.CompositeIndexMetadata; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.MergeDimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricEntry; +import org.opensearch.index.compositeindex.datacube.startree.meta.StarTreeMetadata; +import org.opensearch.index.compositeindex.datacube.startree.node.OffHeapStarTree; +import org.opensearch.index.compositeindex.datacube.startree.node.StarTree; +import org.opensearch.index.compositeindex.datacube.startree.node.StarTreeNode; +import org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeHelper; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.LinkedHashMap; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.concurrent.ConcurrentHashMap; + +/** + * Reader for star tree index and star tree doc values from the segments + * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite90DocValuesReader extends DocValuesProducer implements CompositeIndexReader { + private static final Logger logger = LogManager.getLogger(CompositeIndexMetadata.class); + + private final DocValuesProducer delegate; + private final IndexInput dataIn; + private final ChecksumIndexInput metaIn; + private final Map starTreeMap = new LinkedHashMap<>(); + private final Map compositeIndexMetadataMap = new LinkedHashMap<>(); + private final Map compositeDocValuesProducerMap = new LinkedHashMap<>(); + private final List compositeFieldInfos = new ArrayList<>(); + private final SegmentReadState readState; + + public Composite90DocValuesReader(DocValuesProducer producer, SegmentReadState readState) throws IOException { + this.delegate = producer; + this.readState = readState; + + String metaFileName = IndexFileNames.segmentFileName( + readState.segmentInfo.name, + readState.segmentSuffix, + Composite90DocValuesFormat.META_EXTENSION + ); + + String dataFileName = IndexFileNames.segmentFileName( + readState.segmentInfo.name, + readState.segmentSuffix, + Composite90DocValuesFormat.DATA_EXTENSION + ); + + boolean success = false; + try 
{ + + dataIn = readState.directory.openInput(dataFileName, readState.context); + CodecUtil.checkIndexHeader( + dataIn, + Composite90DocValuesFormat.DATA_CODEC_NAME, + Composite90DocValuesFormat.VERSION_START, + Composite90DocValuesFormat.VERSION_CURRENT, + readState.segmentInfo.getId(), + readState.segmentSuffix + ); + + metaIn = readState.directory.openChecksumInput(metaFileName, readState.context); + Throwable priorE = null; + try { + CodecUtil.checkIndexHeader( + metaIn, + Composite90DocValuesFormat.META_CODEC_NAME, + Composite90DocValuesFormat.VERSION_START, + Composite90DocValuesFormat.VERSION_CURRENT, + readState.segmentInfo.getId(), + readState.segmentSuffix + ); + + while (true) { + long magicMarker = metaIn.readLong(); + + if (magicMarker == -1) { + logger.info("EOF reached for composite index metadata"); + break; + } else if (magicMarker < 0) { + throw new CorruptIndexException("Unknown token encountered: " + magicMarker, metaIn); + } + CompositeIndexMetadata compositeIndexMetadata = new CompositeIndexMetadata(metaIn, magicMarker); + String compositeFieldName = compositeIndexMetadata.getCompositeFieldName(); + compositeFieldInfos.add( + new CompositeIndexFieldInfo(compositeFieldName, compositeIndexMetadata.getCompositeFieldType()) + ); + switch (compositeIndexMetadata.getCompositeFieldType()) { + case STAR_TREE: + StarTreeMetadata starTreeMetadata = compositeIndexMetadata.getStarTreeMetadata(); + StarTree starTree = new OffHeapStarTree(dataIn, starTreeMetadata); + starTreeMap.put(compositeFieldName, starTree); + compositeIndexMetadataMap.put(compositeFieldName, compositeIndexMetadata); + + List dimensionFieldNumbers = starTreeMetadata.getDimensionFieldNumbers(); + List dimensions = new ArrayList<>(); + for (Integer fieldNumber : dimensionFieldNumbers) { + dimensions.add(readState.fieldInfos.fieldInfo(fieldNumber)); + } + + StarTree99DocValuesProducer starTreeDocValuesProducer = new StarTree99DocValuesProducer( + readState, + Composite90DocValuesFormat.DATA_DOC_VALUES_CODEC, + Composite90DocValuesFormat.DATA_DOC_VALUES_EXTENSION, + Composite90DocValuesFormat.META_DOC_VALUES_CODEC, + Composite90DocValuesFormat.META_DOC_VALUES_EXTENSION, + dimensions, + starTreeMetadata.getMetricEntries(), + compositeFieldName + ); + compositeDocValuesProducerMap.put(compositeFieldName, starTreeDocValuesProducer); + + break; + default: + throw new CorruptIndexException("Invalid composite field type found in the file", dataIn); + } + } + } catch (Throwable t) { + priorE = t; + } finally { + CodecUtil.checkFooter(metaIn, priorE); + } + success = true; + } finally { + if (success == false) { + IOUtils.closeWhileHandlingException(this); + } + } + } + + @Override + public NumericDocValues getNumeric(FieldInfo field) throws IOException { + return delegate.getNumeric(field); + } + + @Override + public BinaryDocValues getBinary(FieldInfo field) throws IOException { + return delegate.getBinary(field); + } + + @Override + public SortedDocValues getSorted(FieldInfo field) throws IOException { + return delegate.getSorted(field); + } + + @Override + public SortedNumericDocValues getSortedNumeric(FieldInfo field) throws IOException { + return delegate.getSortedNumeric(field); + } + + @Override + public SortedSetDocValues getSortedSet(FieldInfo field) throws IOException { + return delegate.getSortedSet(field); + } + + @Override + public void checkIntegrity() throws IOException { + delegate.checkIntegrity(); + CodecUtil.checksumEntireFile(metaIn); + CodecUtil.checksumEntireFile(dataIn); + } + + @Override + public 
void close() throws IOException { + delegate.close(); + starTreeMap.clear(); + compositeIndexMetadataMap.clear(); + compositeDocValuesProducerMap.clear(); + } + + @Override + public List getCompositeIndexFields() { + return compositeFieldInfos; + + } + + @Override + public CompositeIndexValues getCompositeIndexValues(CompositeIndexFieldInfo compositeIndexFieldInfo) throws IOException { + + switch (compositeIndexFieldInfo.getType()) { + case STAR_TREE: + CompositeIndexMetadata compositeIndexMetadata = compositeIndexMetadataMap.get(compositeIndexFieldInfo.getField()); + StarTreeMetadata starTreeMetadata = compositeIndexMetadata.getStarTreeMetadata(); + Set skipStarNodeCreationInDimsFieldNumbers = starTreeMetadata.getSkipStarNodeCreationInDims(); + Set skipStarNodeCreationInDims = new HashSet<>(); + for (Integer fieldNumber : skipStarNodeCreationInDimsFieldNumbers) { + skipStarNodeCreationInDims.add(readState.fieldInfos.fieldInfo(fieldNumber).getName()); + } + + List dimensionFieldNumbers = starTreeMetadata.getDimensionFieldNumbers(); + List dimensions = new ArrayList<>(); + List mergeDimensions = new ArrayList<>(); + for (Integer fieldNumber : dimensionFieldNumbers) { + dimensions.add(readState.fieldInfos.fieldInfo(fieldNumber).getName()); + mergeDimensions.add(new MergeDimension(readState.fieldInfos.fieldInfo(fieldNumber).name)); + } + + Map starTreeMetricMap = new ConcurrentHashMap<>(); + for (MetricEntry metricEntry : starTreeMetadata.getMetricEntries()) { + String metricName = metricEntry.getMetricName(); + + Metric metric = starTreeMetricMap.computeIfAbsent(metricName, field -> new Metric(field, new ArrayList<>())); + metric.getMetrics().add(metricEntry.getMetricStat()); + } + List starTreeMetrics = new ArrayList<>(starTreeMetricMap.values()); + + StarTreeField starTreeField = new StarTreeField( + compositeIndexMetadata.getCompositeFieldName(), + mergeDimensions, + starTreeMetrics, + new StarTreeFieldConfiguration( + starTreeMetadata.getMaxLeafDocs(), + skipStarNodeCreationInDims, + starTreeMetadata.getStarTreeBuildMode() + ) + ); + StarTreeNode rootNode = starTreeMap.get(compositeIndexFieldInfo.getField()).getRoot(); + StarTree99DocValuesProducer starTree99DocValuesProducer = (StarTree99DocValuesProducer) compositeDocValuesProducerMap.get( + compositeIndexMetadata.getCompositeFieldName() + ); + Map dimensionsDocIdSetIteratorMap = new LinkedHashMap<>(); + Map metricsDocIdSetIteratorMap = new LinkedHashMap<>(); + + for (String dimension : dimensions) { + dimensionsDocIdSetIteratorMap.put( + dimension, + starTree99DocValuesProducer.getSortedNumeric( + StarTreeHelper.fullFieldNameForStarTreeDimensionsDocValues(starTreeField.getName(), dimension) + ) + ); + } + + for (MetricEntry metricEntry : starTreeMetadata.getMetricEntries()) { + String metricFullName = StarTreeHelper.fullFieldNameForStarTreeMetricsDocValues( + starTreeField.getName(), + metricEntry.getMetricName(), + metricEntry.getMetricStat().getTypeName() + ); + metricsDocIdSetIteratorMap.put(metricFullName, starTree99DocValuesProducer.getSortedNumeric(metricFullName)); + } + + return new StarTreeValues(starTreeField, rootNode, dimensionsDocIdSetIteratorMap, metricsDocIdSetIteratorMap); + + default: + throw new CorruptIndexException("Unsupported composite index field type: ", compositeIndexFieldInfo.getType().getName()); + } + + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesWriter.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesWriter.java new 
file mode 100644 index 0000000000000..bf9884e4f2ad2 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite90DocValuesWriter.java @@ -0,0 +1,246 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.CodecUtil; +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene90.Composite99DocValuesConsumer; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexFileNames; +import org.apache.lucene.index.MergeState; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.store.IndexOutput; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.util.io.IOUtils; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.builder.StarTreesBuilder; +import org.opensearch.index.mapper.CompositeMappedFieldType; +import org.opensearch.index.mapper.MapperService; + +import java.io.IOException; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.List; +import java.util.Map; +import java.util.Set; + +/** + * This class write the star tree index and star tree doc values + * based on the doc values structures of the original index + * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite90DocValuesWriter extends DocValuesConsumer { + private final DocValuesConsumer delegate; + private final SegmentWriteState state; + private final MapperService mapperService; + private MergeState mergeState = null; + private final Set compositeMappedFieldTypes; + private final Set compositeFieldSet; + private final DocValuesConsumer composite99DocValuesConsumer; + + public IndexOutput dataOut; + public IndexOutput metaOut; + + private final Map fieldProducerMap = new HashMap<>(); + private static final Logger logger = LogManager.getLogger(Composite90DocValuesWriter.class); + + public Composite90DocValuesWriter(DocValuesConsumer delegate, SegmentWriteState segmentWriteState, MapperService mapperService) + throws IOException { + + boolean success = false; + try { + String dataFileName = IndexFileNames.segmentFileName( + segmentWriteState.segmentInfo.name, + segmentWriteState.segmentSuffix, + Composite90DocValuesFormat.DATA_EXTENSION + ); + dataOut = segmentWriteState.directory.createOutput(dataFileName, segmentWriteState.context); + CodecUtil.writeIndexHeader( + dataOut, + Composite90DocValuesFormat.DATA_CODEC_NAME, + Composite90DocValuesFormat.VERSION_CURRENT, + segmentWriteState.segmentInfo.getId(), + segmentWriteState.segmentSuffix + ); + + String metaFileName = IndexFileNames.segmentFileName( + segmentWriteState.segmentInfo.name, + segmentWriteState.segmentSuffix, + Composite90DocValuesFormat.META_EXTENSION + ); + metaOut = segmentWriteState.directory.createOutput(metaFileName, segmentWriteState.context); + CodecUtil.writeIndexHeader( + metaOut, + Composite90DocValuesFormat.META_CODEC_NAME, + Composite90DocValuesFormat.VERSION_CURRENT, + segmentWriteState.segmentInfo.getId(), + 
segmentWriteState.segmentSuffix + ); + + success = true; + } finally { + if (success == false) { + IOUtils.closeWhileHandlingException(this); + } + } + + this.delegate = delegate; + this.state = segmentWriteState; + this.mapperService = mapperService; + this.compositeMappedFieldTypes = mapperService.getCompositeFieldTypes(); + compositeFieldSet = new HashSet<>(); + for (CompositeMappedFieldType type : compositeMappedFieldTypes) { + compositeFieldSet.addAll(type.fields()); + } + this.composite99DocValuesConsumer = new Composite99DocValuesConsumer( + segmentWriteState, + Composite90DocValuesFormat.DATA_DOC_VALUES_CODEC, + Composite90DocValuesFormat.DATA_DOC_VALUES_EXTENSION, + Composite90DocValuesFormat.META_DOC_VALUES_CODEC, + Composite90DocValuesFormat.META_DOC_VALUES_EXTENSION + ); + } + + @Override + public void addNumericField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addNumericField(field, valuesProducer); + } + + @Override + public void addBinaryField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addBinaryField(field, valuesProducer); + } + + @Override + public void addSortedField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addSortedField(field, valuesProducer); + } + + @Override + public void addSortedNumericField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addSortedNumericField(field, valuesProducer); + // Perform this only during flush flow + if (mergeState == null) { + createCompositeIndicesIfPossible(valuesProducer, field); + } + } + + @Override + public void addSortedSetField(FieldInfo field, DocValuesProducer valuesProducer) throws IOException { + delegate.addSortedSetField(field, valuesProducer); + } + + @Override + public void close() throws IOException { + delegate.close(); + boolean success = false; + try { + if (metaOut != null) { + metaOut.writeLong(-1); // write EOF marker + CodecUtil.writeFooter(metaOut); // write checksum + } + if (dataOut != null) { + CodecUtil.writeFooter(dataOut); // write checksum + } + success = true; + } finally { + if (success) { + IOUtils.close(dataOut, metaOut); + } else { + IOUtils.closeWhileHandlingException(dataOut, metaOut); + } + metaOut = dataOut = null; + } + } + + private void createCompositeIndicesIfPossible(DocValuesProducer valuesProducer, FieldInfo field) throws IOException { + if (compositeFieldSet.isEmpty()) return; + if (compositeFieldSet.contains(field.name)) { + fieldProducerMap.put(field.name, valuesProducer); + compositeFieldSet.remove(field.name); + } + // we have all the required fields to build composite fields + if (compositeFieldSet.isEmpty()) { + for (CompositeMappedFieldType mappedType : compositeMappedFieldTypes) { + if (mappedType.getCompositeIndexType().equals(CompositeMappedFieldType.CompositeFieldType.STAR_TREE)) { + StarTreesBuilder starTreesBuilder = new StarTreesBuilder(state, mapperService); + starTreesBuilder.build(metaOut, dataOut, fieldProducerMap, composite99DocValuesConsumer); + } + } + } + } + + @Override + public void merge(MergeState mergeState) throws IOException { + // TODO : check if class variable will cause concurrency issues + this.mergeState = mergeState; + super.merge(mergeState); + mergeCompositeFields(mergeState); + } + + /** + * Merges composite fields from multiple segments + * + * @param mergeState merge state + */ + private void mergeCompositeFields(MergeState mergeState) throws IOException { + mergeStarTreeFields(mergeState); + } + + 
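A note on the flush-versus-merge split above: Lucene's default DocValuesConsumer.merge(MergeState) re-enters the same add*Field callbacks, which is why the writer records the MergeState first and lets createCompositeIndicesIfPossible run only on the flush path, while merges rebuild the trees from each segment's StarTreeValues. A stripped-down sketch of this gating pattern, with GatingConsumerSketch as an invented name and the real per-field work elided:

    import java.io.IOException;
    import org.apache.lucene.codecs.DocValuesConsumer;
    import org.apache.lucene.index.MergeState;

    // Illustrative sketch: distinguish flush-time callbacks from the same
    // callbacks re-entered while merging.
    abstract class GatingConsumerSketch extends DocValuesConsumer {
        private MergeState mergeState = null;

        @Override
        public void merge(MergeState state) throws IOException {
            this.mergeState = state; // mark that a merge is in progress
            super.merge(state);      // routes back through the add*Field callbacks
            // merge-specific rebuild would happen here
        }

        void onSortedNumericField() {
            if (mergeState == null) {
                // flush path only: build per-field composite structures here
            }
        }
    }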
/**
+ * Merges star tree data fields from multiple segments
+ *
+ * @param mergeState merge state
+ */
+ private void mergeStarTreeFields(MergeState mergeState) throws IOException {
+ Map<String, List<StarTreeValues>> starTreeSubsPerField = new HashMap<>();
+ Map<String, StarTreeField> starTreeFieldMap = new HashMap<>();
+ for (int i = 0; i < mergeState.docValuesProducers.length; i++) {
+ CompositeIndexReader reader = null;
+ if (mergeState.docValuesProducers[i] == null) {
+ continue;
+ }
+ if (mergeState.docValuesProducers[i] instanceof CompositeIndexReader) {
+ reader = (CompositeIndexReader) mergeState.docValuesProducers[i];
+ } else {
+ continue;
+ }
+
+ List<CompositeIndexFieldInfo> compositeFieldInfo = reader.getCompositeIndexFields();
+ for (CompositeIndexFieldInfo fieldInfo : compositeFieldInfo) {
+ if (fieldInfo.getType().equals(CompositeMappedFieldType.CompositeFieldType.STAR_TREE)) {
+ CompositeIndexValues compositeIndexValues = reader.getCompositeIndexValues(fieldInfo);
+ if (compositeIndexValues instanceof StarTreeValues) {
+ // use a mutable list here; adding to Collections.emptyList() would throw
+ List<StarTreeValues> fieldsList = starTreeSubsPerField.getOrDefault(fieldInfo.getField(), new java.util.ArrayList<>());
+ if (!starTreeFieldMap.containsKey(fieldInfo.getField())) {
+ starTreeFieldMap.put(fieldInfo.getField(), ((StarTreeValues) compositeIndexValues).getStarTreeField());
+ } else if (starTreeFieldMap.get(fieldInfo.getField())
+ .equals(((StarTreeValues) compositeIndexValues).getStarTreeField()) == false) {
+ // star tree configuration must be the same across segments
+ logger.error("Star tree configuration is not the same for segments during merge");
+ assert false : "star tree configuration differs across segments during merge";
+ }
+ fieldsList.add((StarTreeValues) compositeIndexValues);
+ starTreeSubsPerField.put(fieldInfo.getField(), fieldsList);
+ }
+ }
+ }
+ }
+ final StarTreesBuilder starTreesBuilder = new StarTreesBuilder(state, mapperService);
+ starTreesBuilder.buildDuringMerge(metaOut, dataOut, starTreeFieldMap, starTreeSubsPerField, composite99DocValuesConsumer);
+ }
+}
diff --git a/server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java b/server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java
new file mode 100644
index 0000000000000..2c235cf8c4e79
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/codec/composite/Composite99Codec.java
@@ -0,0 +1,56 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.codec.composite;
+
+import org.apache.logging.log4j.Logger;
+import org.apache.lucene.codecs.Codec;
+import org.apache.lucene.codecs.DocValuesFormat;
+import org.apache.lucene.codecs.FilterCodec;
+import org.apache.lucene.codecs.lucene99.Lucene99Codec;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.index.codec.PerFieldMappingPostingFormatCodec;
+import org.opensearch.index.mapper.MapperService;
+
+/**
+ * Extends the Codec to support new file formats for composite indices, e.g. star tree index,
+ * based on the mappings.
+ * + * @opensearch.experimental + */ +@ExperimentalApi +public class Composite99Codec extends FilterCodec { + public static final String COMPOSITE_INDEX_CODEC_NAME = "Composite99Codec"; + private final MapperService mapperService; + + public Composite99Codec() { + this(COMPOSITE_INDEX_CODEC_NAME, new Lucene99Codec(), null); + } + + public Composite99Codec(Lucene99Codec.Mode compressionMode, MapperService mapperService, Logger logger) { + this(COMPOSITE_INDEX_CODEC_NAME, new PerFieldMappingPostingFormatCodec(compressionMode, mapperService, logger), mapperService); + } + + /** + * Sole constructor. When subclassing this codec, create a no-arg ctor and pass the delegate codec and a unique name to + * this ctor. + * + * @param name name of the codec + * @param delegate codec delegate + * @param mapperService mapper service instance + */ + protected Composite99Codec(String name, Codec delegate, MapperService mapperService) { + super(name, delegate); + this.mapperService = mapperService; + } + + @Override + public DocValuesFormat docValuesFormat() { + return new Composite90DocValuesFormat(mapperService); + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java new file mode 100644 index 0000000000000..3acedc6a27d7f --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeCodecFactory.java @@ -0,0 +1,42 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.MapperService; + +import java.util.HashMap; +import java.util.Map; + +import static org.opensearch.index.codec.CodecService.BEST_COMPRESSION_CODEC; +import static org.opensearch.index.codec.CodecService.DEFAULT_CODEC; +import static org.opensearch.index.codec.CodecService.LZ4; +import static org.opensearch.index.codec.CodecService.ZLIB; + +/** + * Factory class to return the latest composite codec for all the modes + * + * @opensearch.experimental + */ +@ExperimentalApi +public class CompositeCodecFactory { + public CompositeCodecFactory() {} + + public Map getCompositeIndexCodecs(MapperService mapperService, Logger logger) { + Map codecs = new HashMap<>(); + codecs.put(DEFAULT_CODEC, new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, mapperService, logger)); + codecs.put(LZ4, new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, mapperService, logger)); + codecs.put(BEST_COMPRESSION_CODEC, new Composite99Codec(Lucene99Codec.Mode.BEST_COMPRESSION, mapperService, logger)); + codecs.put(ZLIB, new Composite99Codec(Lucene99Codec.Mode.BEST_COMPRESSION, mapperService, logger)); + return codecs; + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java new file mode 100644 index 0000000000000..4ac5bcf1b605f --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexFieldInfo.java @@ -0,0 +1,36 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require 
contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +/** + * Field info details of composite index fields + * + * @opensearch.experimental + */ +@ExperimentalApi +public class CompositeIndexFieldInfo { + private final String field; + private final CompositeMappedFieldType.CompositeFieldType type; + + public CompositeIndexFieldInfo(String field, CompositeMappedFieldType.CompositeFieldType type) { + this.field = field; + this.type = type; + } + + public String getField() { + return field; + } + + public CompositeMappedFieldType.CompositeFieldType getType() { + return type; + } +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java new file mode 100644 index 0000000000000..a159b0619bcbb --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexReader.java @@ -0,0 +1,33 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.io.IOException; +import java.util.List; + +/** + * Interface that abstracts the functionality to read composite index structures from the segment + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface CompositeIndexReader { + /** + * Get list of composite index fields from the segment + * + */ + List getCompositeIndexFields(); + + /** + * Get composite index values based on the field name and the field type + */ + CompositeIndexValues getCompositeIndexValues(CompositeIndexFieldInfo fieldInfo) throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java new file mode 100644 index 0000000000000..f8848aceab343 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/CompositeIndexValues.java @@ -0,0 +1,21 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.codec.composite; + +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Interface for composite index values + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface CompositeIndexValues { + CompositeIndexValues getValues(); +} diff --git a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java new file mode 100644 index 0000000000000..baed90273d311 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java @@ -0,0 +1,63 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
diff --git a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java
new file mode 100644
index 0000000000000..baed90273d311
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeValues.java
@@ -0,0 +1,63 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.codec.composite.datacube.startree;
+
+import org.apache.lucene.search.DocIdSetIterator;
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.index.codec.composite.CompositeIndexValues;
+import org.opensearch.index.compositeindex.datacube.startree.StarTreeField;
+import org.opensearch.index.compositeindex.datacube.startree.node.StarTreeNode;
+
+import java.util.Map;
+
+/**
+ * Concrete class that holds the star tree associated values from the segment
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class StarTreeValues implements CompositeIndexValues {
+    private final StarTreeField starTreeField;
+    private final StarTreeNode root;
+    private final Map<String, DocIdSetIterator> dimensionDocValuesIteratorMap;
+    private final Map<String, DocIdSetIterator> metricDocValuesIteratorMap;
+
+    public StarTreeValues(
+        StarTreeField starTreeField,
+        StarTreeNode root,
+        Map<String, DocIdSetIterator> dimensionDocValuesIteratorMap,
+        Map<String, DocIdSetIterator> metricDocValuesIteratorMap
+    ) {
+        this.starTreeField = starTreeField;
+        this.root = root;
+        this.dimensionDocValuesIteratorMap = dimensionDocValuesIteratorMap;
+        this.metricDocValuesIteratorMap = metricDocValuesIteratorMap;
+    }
+
+    @Override
+    public CompositeIndexValues getValues() {
+        return this;
+    }
+
+    public StarTreeField getStarTreeField() {
+        return starTreeField;
+    }
+
+    public StarTreeNode getRoot() {
+        return root;
+    }
+
+    public Map<String, DocIdSetIterator> getDimensionDocValuesIteratorMap() {
+        return dimensionDocValuesIteratorMap;
+    }
+
+    public Map<String, DocIdSetIterator> getMetricDocValuesIteratorMap() {
+        return metricDocValuesIteratorMap;
+    }
+}
diff --git a/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java
new file mode 100644
index 0000000000000..67808ad51289a
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/codec/composite/datacube/startree/package-info.java
@@ -0,0 +1,12 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+/**
+ * classes responsible for handling all star tree structures and operations as part of codec
+ */
+package org.opensearch.index.codec.composite.datacube.startree;
diff --git a/server/src/main/java/org/opensearch/index/codec/composite/package-info.java b/server/src/main/java/org/opensearch/index/codec/composite/package-info.java
new file mode 100644
index 0000000000000..5d15e99c00975
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/codec/composite/package-info.java
@@ -0,0 +1,12 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +/** + * classes responsible for handling all composite index codecs and operations + */ +package org.opensearch.index.codec.composite; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexConstants.java b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexConstants.java new file mode 100644 index 0000000000000..defb4a26de260 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexConstants.java @@ -0,0 +1,26 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex; + +/** + * This class contains constants used in the Composite Index implementation. + */ +public class CompositeIndexConstants { + + /** + * The magic marker value used for sanity checks in the Composite Index implementation. + */ + public static final long MAGIC_MARKER = 0xC0950513F1E1DL; // Composite Field + + /** + * The version of the Composite Index implementation. + */ + public static final int VERSION = 1; + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexMetadata.java b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexMetadata.java new file mode 100644 index 0000000000000..a3e3bbc687566 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexMetadata.java @@ -0,0 +1,95 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.store.IndexInput; +import org.opensearch.index.compositeindex.datacube.startree.meta.StarTreeMetadata; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +import java.io.IOException; + +import static org.opensearch.index.compositeindex.CompositeIndexConstants.MAGIC_MARKER; +import static org.opensearch.index.compositeindex.CompositeIndexConstants.VERSION; + +/** + * This class represents the metadata of a Composite Index, which includes information about + * the composite field name, type, and the specific metadata for the type of composite field + * (e.g., StarTree metadata). + * + * @opensearch.experimental + */ +public class CompositeIndexMetadata { + + private static final Logger logger = LogManager.getLogger(CompositeIndexMetadata.class); + private final String compositeFieldName; + private final CompositeMappedFieldType.CompositeFieldType compositeFieldType; + private final StarTreeMetadata starTreeMetadata; + + /** + * Constructs a CompositeIndexMetadata object from the provided IndexInput and magic marker. 
+ * + * @param meta the IndexInput containing the metadata + * @param magicMarker the magic marker value + * @throws IOException if an I/O error occurs while reading the metadata + */ + public CompositeIndexMetadata(IndexInput meta, long magicMarker) throws IOException { + if (MAGIC_MARKER != magicMarker) { + logger.error("Invalid composite field magic marker"); + throw new IOException("Invalid composite field magic marker"); + } + int version = meta.readInt(); + if (VERSION != version) { + logger.error("Invalid composite field version"); + throw new IOException("Invalid composite field version"); + } + + compositeFieldName = meta.readString(); + compositeFieldType = CompositeMappedFieldType.CompositeFieldType.fromName(meta.readString()); + + switch (compositeFieldType) { + // support for type of composite fields can be added in the future. + case STAR_TREE: + starTreeMetadata = new StarTreeMetadata(meta, compositeFieldName, compositeFieldType.getName()); + break; + default: + throw new CorruptIndexException("Invalid composite field type present in the file", meta); + } + + } + + /** + * Returns the star-tree metadata. + * + * @return the StarTreeMetadata + */ + public StarTreeMetadata getStarTreeMetadata() { + return starTreeMetadata; + } + + /** + * Returns the name of the composite field. + * + * @return the composite field name + */ + public String getCompositeFieldName() { + return compositeFieldName; + } + + /** + * Returns the type of the composite field. + * + * @return the composite field type + */ + public CompositeMappedFieldType.CompositeFieldType getCompositeFieldType() { + return compositeFieldType; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java new file mode 100644 index 0000000000000..014dd22426a10 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexSettings.java @@ -0,0 +1,55 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Setting; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; + +/** + * Cluster level settings for composite indices + * + * @opensearch.experimental + */ +@ExperimentalApi +public class CompositeIndexSettings { + public static final Setting STAR_TREE_INDEX_ENABLED_SETTING = Setting.boolSetting( + "indices.composite_index.star_tree.enabled", + false, + value -> { + if (FeatureFlags.isEnabled(FeatureFlags.STAR_TREE_INDEX_SETTING) == false && value == true) { + throw new IllegalArgumentException( + "star tree index is under an experimental feature and can be activated only by enabling " + + FeatureFlags.STAR_TREE_INDEX_SETTING.getKey() + + " feature flag in the JVM options" + ); + } + }, + Setting.Property.NodeScope, + Setting.Property.Dynamic + ); + + private volatile boolean starTreeIndexCreationEnabled; + + public CompositeIndexSettings(Settings settings, ClusterSettings clusterSettings) { + this.starTreeIndexCreationEnabled = STAR_TREE_INDEX_ENABLED_SETTING.get(settings); + clusterSettings.addSettingsUpdateConsumer(STAR_TREE_INDEX_ENABLED_SETTING, this::starTreeIndexCreationEnabled); + + } + + private void starTreeIndexCreationEnabled(boolean value) { + this.starTreeIndexCreationEnabled = value; + } + + public boolean isStarTreeIndexCreationEnabled() { + return starTreeIndexCreationEnabled; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java new file mode 100644 index 0000000000000..995352e3ce6a5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/CompositeIndexValidator.java @@ -0,0 +1,46 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.IndexSettings; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeValidator; +import org.opensearch.index.mapper.MapperService; + +import java.util.Locale; + +/** + * Validation for composite indices as part of mappings + * + * @opensearch.experimental + */ +@ExperimentalApi +public class CompositeIndexValidator { + + public static void validate(MapperService mapperService, CompositeIndexSettings compositeIndexSettings, IndexSettings indexSettings) { + StarTreeValidator.validate(mapperService, compositeIndexSettings, indexSettings); + } + + public static void validate( + MapperService mapperService, + CompositeIndexSettings compositeIndexSettings, + IndexSettings indexSettings, + boolean isCompositeFieldPresent + ) { + if (!isCompositeFieldPresent && mapperService.isCompositeIndexPresent()) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "Composite fields must be specified during index creation, addition of new composite fields during update is not supported" + ) + ); + } + StarTreeValidator.validate(mapperService, compositeIndexSettings, indexSettings); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java new file mode 100644 index 0000000000000..074016db2aed7 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DateDimension.java @@ -0,0 +1,72 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.Rounding; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.mapper.CompositeDataCubeFieldType; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +/** + * Date dimension class + * + * @opensearch.experimental + */ +@ExperimentalApi +public class DateDimension implements Dimension { + private final List calendarIntervals; + public static final String CALENDAR_INTERVALS = "calendar_intervals"; + public static final String DATE = "date"; + private final String field; + + public DateDimension(String field, List calendarIntervals) { + this.field = field; + this.calendarIntervals = calendarIntervals; + } + + public List getIntervals() { + return calendarIntervals; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(CompositeDataCubeFieldType.NAME, this.getField()); + builder.field(CompositeDataCubeFieldType.TYPE, DATE); + builder.startArray(CALENDAR_INTERVALS); + for (Rounding.DateTimeUnit interval : calendarIntervals) { + builder.value(interval.shortName()); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + DateDimension that = (DateDimension) o; + return Objects.equals(field, that.getField()) && Objects.equals(calendarIntervals, that.calendarIntervals); + } + + @Override + public int hashCode() { + return Objects.hash(field, calendarIntervals); + } + + @Override + public String getField() { + return field; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java new file mode 100644 index 0000000000000..0151a474579be --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Dimension.java @@ -0,0 +1,22 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.ToXContent; + +/** + * Base interface for data-cube dimensions + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface Dimension extends ToXContent { + String getField(); +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java new file mode 100644 index 0000000000000..6a09e947217f5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/DimensionFactory.java @@ -0,0 +1,99 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.Rounding; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.xcontent.support.XContentMapValues; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; +import org.opensearch.index.mapper.DateFieldMapper; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.NumberFieldMapper; + +import java.util.ArrayList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.stream.Collectors; + +import static org.opensearch.index.compositeindex.datacube.DateDimension.CALENDAR_INTERVALS; + +/** + * Dimension factory class mainly used to parse and create dimension from the mappings + * + * @opensearch.experimental + */ +@ExperimentalApi +public class DimensionFactory { + public static Dimension parseAndCreateDimension( + String name, + String type, + Map dimensionMap, + Mapper.TypeParser.ParserContext c + ) { + switch (type) { + case DateDimension.DATE: + return parseAndCreateDateDimension(name, dimensionMap, c); + case NumericDimension.NUMERIC: + return new NumericDimension(name); + default: + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unsupported field type associated with dimension [%s] as part of star tree field", name) + ); + } + } + + public static Dimension parseAndCreateDimension( + String name, + Mapper.Builder builder, + Map dimensionMap, + Mapper.TypeParser.ParserContext c + ) { + if (builder instanceof DateFieldMapper.Builder) { + return parseAndCreateDateDimension(name, dimensionMap, c); + } else if (builder instanceof NumberFieldMapper.Builder) { + return new NumericDimension(name); + } + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unsupported field type associated with star tree dimension [%s]", name) + ); + } + + private static DateDimension parseAndCreateDateDimension( + String name, + Map dimensionMap, + Mapper.TypeParser.ParserContext c + ) { + List calendarIntervals = new ArrayList<>(); + List intervalStrings = XContentMapValues.extractRawValues(CALENDAR_INTERVALS, dimensionMap) + .stream() + .map(Object::toString) + .collect(Collectors.toList()); + if (intervalStrings == null || intervalStrings.isEmpty()) { + calendarIntervals = StarTreeIndexSettings.DEFAULT_DATE_INTERVALS.get(c.getSettings()); + } else { + if (intervalStrings.size() > StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING.get(c.getSettings())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "At most [%s] calendar intervals are allowed in dimension [%s]", + StarTreeIndexSettings.STAR_TREE_MAX_DATE_INTERVALS_SETTING.get(c.getSettings()), + name + ) + ); + } + for (String interval : intervalStrings) { + calendarIntervals.add(StarTreeIndexSettings.getTimeUnit(interval)); + } + calendarIntervals = new ArrayList<>(calendarIntervals); + } + dimensionMap.remove(CALENDAR_INTERVALS); + return new DateDimension(name, calendarIntervals); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/MergeDimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/MergeDimension.java new file mode 100644 index 0000000000000..1e15cae2e0029 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/MergeDimension.java @@ -0,0 +1,56 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be 
licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.mapper.CompositeDataCubeFieldType; + +import java.io.IOException; +import java.util.Objects; + +/** + * Composite index merge dimension class + * + * @opensearch.experimental + */ +public class MergeDimension implements Dimension { + public static final String MERGE = "merge"; + private final String field; + + public MergeDimension(String field) { + this.field = field; + } + + public String getField() { + return field; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, ToXContent.Params params) throws IOException { + builder.startObject(); + builder.field(CompositeDataCubeFieldType.NAME, field); + builder.field(CompositeDataCubeFieldType.TYPE, MERGE); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + MergeDimension dimension = (MergeDimension) o; + return Objects.equals(field, dimension.getField()); + } + + @Override + public int hashCode() { + return Objects.hash(field); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java new file mode 100644 index 0000000000000..9accb0201170a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/Metric.java @@ -0,0 +1,65 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.ToXContent; +import org.opensearch.core.xcontent.XContentBuilder; + +import java.io.IOException; +import java.util.List; +import java.util.Objects; + +/** + * Holds details of metrics field as part of composite field + */ +@ExperimentalApi +public class Metric implements ToXContent { + private final String field; + private final List metrics; + + public Metric(String field, List metrics) { + this.field = field; + this.metrics = metrics; + } + + public String getField() { + return field; + } + + public List getMetrics() { + return metrics; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field("name", field); + builder.startArray("stats"); + for (MetricStat metricType : metrics) { + builder.value(metricType.getTypeName()); + } + builder.endArray(); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + Metric metric = (Metric) o; + return Objects.equals(field, metric.field) && Objects.equals(metrics, metric.metrics); + } + + @Override + public int hashCode() { + return Objects.hash(field, metrics); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java new file mode 100644 index 0000000000000..fbde296b15f7e --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/MetricStat.java @@ -0,0 +1,44 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Supported metric types for composite index + * + * @opensearch.experimental + */ +@ExperimentalApi +public enum MetricStat { + COUNT("count"), + AVG("avg"), + SUM("sum"), + MIN("min"), + MAX("max"); + + private final String typeName; + + MetricStat(String typeName) { + this.typeName = typeName; + } + + public String getTypeName() { + return typeName; + } + + public static MetricStat fromTypeName(String typeName) { + for (MetricStat metric : MetricStat.values()) { + if (metric.getTypeName().equalsIgnoreCase(typeName)) { + return metric; + } + } + throw new IllegalArgumentException("Invalid metric stat: " + typeName); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java new file mode 100644 index 0000000000000..9c25ef5b25503 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/NumericDimension.java @@ -0,0 +1,57 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.mapper.CompositeDataCubeFieldType; + +import java.io.IOException; +import java.util.Objects; + +/** + * Composite index numeric dimension class + * + * @opensearch.experimental + */ +@ExperimentalApi +public class NumericDimension implements Dimension { + public static final String NUMERIC = "numeric"; + private final String field; + + public NumericDimension(String field) { + this.field = field; + } + + public String getField() { + return field; + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + builder.startObject(); + builder.field(CompositeDataCubeFieldType.NAME, field); + builder.field(CompositeDataCubeFieldType.TYPE, NUMERIC); + builder.endObject(); + return builder; + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + NumericDimension dimension = (NumericDimension) o; + return Objects.equals(field, dimension.getField()); + } + + @Override + public int hashCode() { + return Objects.hash(field); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java new file mode 100644 index 0000000000000..320876ea937bf --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/package-info.java @@ -0,0 +1,11 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +/** + * Core classes for handling data cube indices such as star tree index. + */ +package org.opensearch.index.compositeindex.datacube; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java new file mode 100644 index 0000000000000..0ce2b3a5cdac5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeDocument.java @@ -0,0 +1,34 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */
+
+package org.opensearch.index.compositeindex.datacube.startree;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+
+import java.util.Arrays;
+
+/**
+ * Star tree document
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class StarTreeDocument {
+    public final Long[] dimensions;
+    public final Object[] metrics;
+
+    public StarTreeDocument(Long[] dimensions, Object[] metrics) {
+        this.dimensions = dimensions;
+        this.metrics = metrics;
+    }
+
+    @Override
+    public String toString() {
+        return Arrays.toString(dimensions) + " | " + Arrays.toString(metrics);
+    }
+}
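For illustration, the flat shape the builder aggregates before writing the tree; the dimension ordinals and metric values below are made-up numbers:

    // One star tree document: ordered dimension values followed by pre-aggregated metrics.
    StarTreeDocument doc = new StarTreeDocument(new Long[] { 200L, 17L }, new Object[] { 1250.0, 4L });
    // doc.toString() renders "[200, 17] | [1250.0, 4]".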
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java
new file mode 100644
index 0000000000000..922ddcbea4fe2
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeField.java
@@ -0,0 +1,94 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.compositeindex.datacube.startree;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.core.xcontent.ToXContent;
+import org.opensearch.core.xcontent.XContentBuilder;
+import org.opensearch.index.compositeindex.datacube.Dimension;
+import org.opensearch.index.compositeindex.datacube.Metric;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.Objects;
+
+/**
+ * Star tree field which contains dimensions, metrics and specs
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class StarTreeField implements ToXContent {
+    private final String name;
+    private final List<Dimension> dimensionsOrder;
+    private final List<Metric> metrics;
+    private final StarTreeFieldConfiguration starTreeConfig;
+
+    public StarTreeField(String name, List<Dimension> dimensions, List<Metric> metrics, StarTreeFieldConfiguration starTreeConfig) {
+        this.name = name;
+        this.dimensionsOrder = dimensions;
+        this.metrics = metrics;
+        this.starTreeConfig = starTreeConfig;
+    }
+
+    public String getName() {
+        return name;
+    }
+
+    public List<Dimension> getDimensionsOrder() {
+        return dimensionsOrder;
+    }
+
+    public List<Metric> getMetrics() {
+        return metrics;
+    }
+
+    public StarTreeFieldConfiguration getStarTreeConfig() {
+        return starTreeConfig;
+    }
+
+    @Override
+    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
+        builder.startObject();
+        builder.field("name", name);
+        if (dimensionsOrder != null && !dimensionsOrder.isEmpty()) {
+            builder.startArray("ordered_dimensions");
+            for (Dimension dimension : dimensionsOrder) {
+                dimension.toXContent(builder, params);
+            }
+            builder.endArray();
+        }
+        if (metrics != null && !metrics.isEmpty()) {
+            builder.startArray("metrics");
+            for (Metric metric : metrics) {
+                metric.toXContent(builder, params);
+            }
+            builder.endArray();
+        }
+        starTreeConfig.toXContent(builder, params);
+        builder.endObject();
+        return builder;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) return true;
+        if (o == null || getClass() != o.getClass()) return false;
+        StarTreeField that = (StarTreeField) o;
+        return Objects.equals(name, that.name)
+            && Objects.equals(dimensionsOrder, that.dimensionsOrder)
+            && Objects.equals(metrics, that.metrics)
+            && Objects.equals(starTreeConfig, that.starTreeConfig);
+    }
+
+    @Override
+    public int hashCode() {
+        return Objects.hash(name, dimensionsOrder, metrics, starTreeConfig);
+    }
+}
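For illustration, a hedged sketch of composing a star tree field from the building blocks in this patch (StarTreeFieldConfiguration is defined in the next file; field names and values are made up; java.util.List/Set imports assumed):

    StarTreeField field = new StarTreeField(
        "sales_tree",
        List.of(new DateDimension("ts", List.of(Rounding.DateTimeUnit.HOUR_OF_DAY)), new NumericDimension("status")),
        List.of(new Metric("price", List.of(MetricStat.SUM, MetricStat.COUNT))),
        new StarTreeFieldConfiguration(10000, Set.of(), StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP)
    );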
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java
new file mode 100644
index 0000000000000..755c064c2c60a
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeFieldConfiguration.java
@@ -0,0 +1,108 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.compositeindex.datacube.startree;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.core.xcontent.ToXContent;
+import org.opensearch.core.xcontent.XContentBuilder;
+
+import java.io.IOException;
+import java.util.Locale;
+import java.util.Objects;
+import java.util.Set;
+import java.util.concurrent.atomic.AtomicInteger;
+
+/**
+ * Star tree index specific configuration
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class StarTreeFieldConfiguration implements ToXContent {
+
+    private final AtomicInteger maxLeafDocs = new AtomicInteger();
+    private final Set<String> skipStarNodeCreationInDims;
+    private final StarTreeBuildMode buildMode;
+
+    public StarTreeFieldConfiguration(int maxLeafDocs, Set<String> skipStarNodeCreationInDims, StarTreeBuildMode buildMode) {
+        this.maxLeafDocs.set(maxLeafDocs);
+        this.skipStarNodeCreationInDims = skipStarNodeCreationInDims;
+        this.buildMode = buildMode;
+    }
+
+    @Override
+    public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException {
+        // build mode is internal and not part of user mappings config, hence not added as part of toXContent
+        builder.field("max_leaf_docs", maxLeafDocs.get());
+        builder.startArray("skip_star_node_creation_for_dimensions");
+        for (String dim : skipStarNodeCreationInDims) {
+            builder.value(dim);
+        }
+        builder.endArray();
+        return builder;
+    }
+
+    /**
+     * Star tree build mode, which determines how sorting and aggregations are performed during index creation.
+     *
+     * @opensearch.experimental
+     */
+    @ExperimentalApi
+    public enum StarTreeBuildMode {
+        // TODO : remove onheap support unless this proves useful
+        ON_HEAP("onheap"),
+        OFF_HEAP("offheap");
+
+        private final String typeName;
+
+        StarTreeBuildMode(String typeName) {
+            this.typeName = typeName;
+        }
+
+        public String getTypeName() {
+            return typeName;
+        }
+
+        public static StarTreeBuildMode fromTypeName(String typeName) {
+            for (StarTreeBuildMode starTreeBuildMode : StarTreeBuildMode.values()) {
+                if (starTreeBuildMode.getTypeName().equalsIgnoreCase(typeName)) {
+                    return starTreeBuildMode;
+                }
+            }
+            throw new IllegalArgumentException(String.format(Locale.ROOT, "Invalid star tree build mode: [%s]", typeName));
+        }
+    }
+
+    public int maxLeafDocs() {
+        return maxLeafDocs.get();
+    }
+
+    public StarTreeBuildMode getBuildMode() {
+        return buildMode;
+    }
+
+    public Set<String> getSkipStarNodeCreationInDims() {
+        return skipStarNodeCreationInDims;
+    }
+
+    @Override
+    public boolean equals(Object o) {
+        if (this == o) return true;
+        if (o == null || getClass() != o.getClass()) return false;
+        StarTreeFieldConfiguration that = (StarTreeFieldConfiguration) o;
+        return Objects.equals(maxLeafDocs.get(), that.maxLeafDocs.get())
+            && Objects.equals(skipStarNodeCreationInDims, that.skipStarNodeCreationInDims)
+            && buildMode == that.buildMode;
+    }
+
+    @Override
+    public int hashCode() {
+        return Objects.hash(maxLeafDocs.get(), skipStarNodeCreationInDims, buildMode);
+    }
+}
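A short usage note on the build-mode enum above; the mapping-level string values are taken from this file:

    // Parse the mapping value back to the constant (case-insensitive).
    StarTreeFieldConfiguration.StarTreeBuildMode mode = StarTreeFieldConfiguration.StarTreeBuildMode.fromTypeName("offheap");
    assert mode == StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP;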
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java
new file mode 100644
index 0000000000000..a2ac545be3cc9
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeIndexSettings.java
@@ -0,0 +1,116 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.compositeindex.datacube.startree;
+
+import org.opensearch.common.Rounding;
+import org.opensearch.common.settings.Setting;
+import org.opensearch.index.compositeindex.datacube.MetricStat;
+import org.opensearch.search.aggregations.bucket.histogram.DateHistogramAggregationBuilder;
+
+import java.util.Arrays;
+import java.util.List;
+
+/**
+ * Index settings for star tree fields. The settings are final, as updates
+ * to star tree mappings are not currently supported.
+ *
+ * @opensearch.experimental
+ */
+public class StarTreeIndexSettings {
+
+    public static int STAR_TREE_MAX_DIMENSIONS_DEFAULT = 10;
+    /**
+     * This setting determines the max number of star tree fields that can be part of composite index mapping. For each
+     * star tree field, we will generate associated star tree index.
+     */
+    public static final Setting<Integer> STAR_TREE_MAX_FIELDS_SETTING = Setting.intSetting(
+        "index.composite_index.star_tree.max_fields",
+        1,
+        1,
+        1,
+        Setting.Property.IndexScope,
+        Setting.Property.Final
+    );
+
+    /**
+     * This setting determines the max number of dimensions that can be part of star tree index field. The number of
+     * dimensions and their associated cardinality have a direct effect on star tree index size and query performance.
+     */
+    public static final Setting<Integer> STAR_TREE_MAX_DIMENSIONS_SETTING = Setting.intSetting(
+        "index.composite_index.star_tree.field.max_dimensions",
+        STAR_TREE_MAX_DIMENSIONS_DEFAULT,
+        2,
+        10,
+        Setting.Property.IndexScope,
+        Setting.Property.Final
+    );
+
+    /**
+     * This setting determines the max number of date intervals that can be part of star tree date field.
+     */
+    public static final Setting<Integer> STAR_TREE_MAX_DATE_INTERVALS_SETTING = Setting.intSetting(
+        "index.composite_index.star_tree.field.max_date_intervals",
+        3,
+        1,
+        3,
+        Setting.Property.IndexScope,
+        Setting.Property.Final
+    );
+
+    /**
+     * This setting configures the default "maxLeafDocs" setting of star tree. This affects both query performance and
+     * star tree index size. Fewer leaf docs improve query latency at the cost of higher storage size, and vice versa.
+     * We can remove this later or change it to an enum based constant setting.
+     *
+     * @opensearch.experimental
+     */
+    public static final Setting<Integer> STAR_TREE_DEFAULT_MAX_LEAF_DOCS = Setting.intSetting(
+        "index.composite_index.star_tree.default.max_leaf_docs",
+        10000,
+        1,
+        Setting.Property.IndexScope,
+        Setting.Property.Final
+    );
+
+    /**
+     * Default intervals for date dimension as part of star tree fields
+     */
+    public static final Setting<List<Rounding.DateTimeUnit>> DEFAULT_DATE_INTERVALS = Setting.listSetting(
+        "index.composite_index.star_tree.field.default.date_intervals",
+        Arrays.asList(Rounding.DateTimeUnit.MINUTES_OF_HOUR.shortName(), Rounding.DateTimeUnit.HOUR_OF_DAY.shortName()),
+        StarTreeIndexSettings::getTimeUnit,
+        Setting.Property.IndexScope,
+        Setting.Property.Final
+    );
+
+    /**
+     * Default metrics for metrics as part of star tree fields
+     */
+    public static final Setting<List<MetricStat>> DEFAULT_METRICS_LIST = Setting.listSetting(
+        "index.composite_index.star_tree.field.default.metrics",
+        Arrays.asList(
+            MetricStat.AVG.toString(),
+            MetricStat.COUNT.toString(),
+            MetricStat.SUM.toString(),
+            MetricStat.MAX.toString(),
+            MetricStat.MIN.toString()
+        ),
+        MetricStat::fromTypeName,
+        Setting.Property.IndexScope,
+        Setting.Property.Final
+    );
+
+    public static Rounding.DateTimeUnit getTimeUnit(String expression) {
+        if (!DateHistogramAggregationBuilder.DATE_FIELD_UNITS.containsKey(expression)) {
+            throw new IllegalArgumentException("unknown calendar interval specified in star tree index mapping");
+        }
+        return DateHistogramAggregationBuilder.DATE_FIELD_UNITS.get(expression);
+    }
+}
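For illustration, a hedged sketch of overriding the defaults above at index creation; the setting keys are taken from this file, while the index-creation wiring and the "1h"/"1d" interval expressions are assumptions (they must be valid DATE_FIELD_UNITS keys):

    Settings indexSettings = Settings.builder()
        .put("index.composite_index.star_tree.default.max_leaf_docs", 5000)
        .putList("index.composite_index.star_tree.field.default.date_intervals", "1h", "1d")
        .build();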
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java
new file mode 100644
index 0000000000000..cbed46604681d
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/StarTreeValidator.java
@@ -0,0 +1,94 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */
+
+package org.opensearch.index.compositeindex.datacube.startree;
+
+import org.opensearch.common.annotation.ExperimentalApi;
+import org.opensearch.index.IndexSettings;
+import org.opensearch.index.compositeindex.CompositeIndexSettings;
+import org.opensearch.index.compositeindex.datacube.Dimension;
+import org.opensearch.index.compositeindex.datacube.Metric;
+import org.opensearch.index.mapper.CompositeMappedFieldType;
+import org.opensearch.index.mapper.MappedFieldType;
+import org.opensearch.index.mapper.MapperService;
+import org.opensearch.index.mapper.StarTreeMapper;
+
+import java.util.Locale;
+import java.util.Set;
+
+/**
+ * Validations for star tree fields as part of mappings
+ *
+ * @opensearch.experimental
+ */
+@ExperimentalApi
+public class StarTreeValidator {
+    public static void validate(MapperService mapperService, CompositeIndexSettings compositeIndexSettings, IndexSettings indexSettings) {
+        Set<CompositeMappedFieldType> compositeFieldTypes = mapperService.getCompositeFieldTypes();
+        if (compositeFieldTypes.size() > StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(indexSettings.getSettings())) {
+            throw new IllegalArgumentException(
+                String.format(
+                    Locale.ROOT,
+                    "Index cannot have more than [%s] star tree fields",
+                    StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(indexSettings.getSettings())
+                )
+            );
+        }
+        for (CompositeMappedFieldType compositeFieldType : compositeFieldTypes) {
+            if (!(compositeFieldType instanceof StarTreeMapper.StarTreeFieldType)) {
+                continue;
+            }
+            if (!compositeIndexSettings.isStarTreeIndexCreationEnabled()) {
+                throw new IllegalArgumentException(
+                    String.format(
+                        Locale.ROOT,
+                        "star tree index cannot be created, enable it using [%s] setting",
+                        CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey()
+                    )
+                );
+            }
+            StarTreeMapper.StarTreeFieldType dataCubeFieldType = (StarTreeMapper.StarTreeFieldType) compositeFieldType;
+            for (Dimension dim : dataCubeFieldType.getDimensions()) {
+                MappedFieldType ft = mapperService.fieldType(dim.getField());
+                if (ft == null) {
+                    throw new IllegalArgumentException(
+                        String.format(Locale.ROOT, "unknown dimension field [%s] as part of star tree field", dim.getField())
+                    );
+                }
+                if (ft.isAggregatable() == false) {
+                    throw new IllegalArgumentException(
+                        String.format(
+                            Locale.ROOT,
+                            "Aggregations not supported for the dimension field [%s] with field type [%s] as part of star tree field",
+                            dim.getField(),
+                            ft.typeName()
+                        )
+                    );
+                }
+            }
+            for (Metric metric : dataCubeFieldType.getMetrics()) {
+                MappedFieldType ft = mapperService.fieldType(metric.getField());
+                if (ft == null) {
+                    throw new IllegalArgumentException(
+                        String.format(Locale.ROOT, "unknown metric field [%s] as part of star tree field", metric.getField())
+                    );
+                }
+                if (ft.isAggregatable() == false) {
+                    throw new IllegalArgumentException(
+                        String.format(
+                            Locale.ROOT,
+                            "Aggregations not supported for the metrics field [%s] with field type [%s] as part of star tree field",
+                            metric.getField(),
+                            ft.typeName()
+                        )
+                    );
+                }
+            }
+        }
+    }
+}
diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java
new file mode 100644
index 0000000000000..26b99c1f17115
--- /dev/null
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregator.java
@@ -0,0 +1,65 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch
Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * Count value aggregator for star tree + * + * @opensearch.experimental + */ +public class CountValueAggregator implements ValueAggregator { + public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.LONG; + + @Override + public MetricStat getAggregationType() { + return MetricStat.COUNT; + } + + @Override + public StarTreeNumericType getAggregatedValueType() { + return VALUE_AGGREGATOR_TYPE; + } + + @Override + public Long getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return 1L; + } + + @Override + public Long mergeAggregatedValueAndSegmentValue(Long value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return value + 1; + } + + @Override + public Long mergeAggregatedValues(Long value, Long aggregatedValue) { + return value + aggregatedValue; + } + + @Override + public Long getInitialAggregatedValue(Long value) { + return value; + } + + @Override + public int getMaxAggregatedValueByteSize() { + return Long.BYTES; + } + + @Override + public Long toLongValue(Long value) { + return value; + } + + @Override + public Long toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + return value; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregator.java new file mode 100644 index 0000000000000..e80d1c9097f9d --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregator.java @@ -0,0 +1,75 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * Max value aggregator for star tree + * + * @opensearch.experimental + */ +public class MaxValueAggregator implements ValueAggregator { + + public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.DOUBLE; + + @Override + public MetricStat getAggregationType() { + return MetricStat.MAX; + } + + @Override + public StarTreeNumericType getAggregatedValueType() { + return VALUE_AGGREGATOR_TYPE; + } + + @Override + public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return starTreeNumericType.getDoubleValue(segmentDocValue); + } + + @Override + public Double mergeAggregatedValueAndSegmentValue(Double value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return Math.max(value, starTreeNumericType.getDoubleValue(segmentDocValue)); + } + + @Override + public Double mergeAggregatedValues(Double value, Double aggregatedValue) { + return Math.max(value, aggregatedValue); + } + + @Override + public Double getInitialAggregatedValue(Double value) { + return value; + } + + @Override + public int getMaxAggregatedValueByteSize() { + return Double.BYTES; + } + + @Override + public Long toLongValue(Double value) { + try { + return NumericUtils.doubleToSortableLong(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable long", e); + } + } + + @Override + public Double toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + try { + return type.getDoubleValue(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable aggregation type", e); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java new file mode 100644 index 0000000000000..9b3c633280b08 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfo.java @@ -0,0 +1,119 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.index.fielddata.IndexNumericFieldData; + +import java.util.Comparator; +import java.util.Objects; + +/** + * Builds aggregation function and doc values field pair to support various aggregations + * + * @opensearch.experimental + */ +public class MetricAggregatorInfo implements Comparable { + + public static final String DELIMITER = "_"; + private final String metric; + private final String starFieldName; + private final MetricStat metricStat; + private final String field; + private final ValueAggregator valueAggregators; + private final StarTreeNumericType starTreeNumericType; + + /** + * Constructor for MetricAggregatorInfo + */ + public MetricAggregatorInfo(MetricStat metricStat, String field, String starFieldName, IndexNumericFieldData.NumericType numericType) { + this.metricStat = metricStat; + this.valueAggregators = ValueAggregatorFactory.getValueAggregator(metricStat); + this.starTreeNumericType = StarTreeNumericType.fromNumericType(numericType); + this.field = field; + this.starFieldName = starFieldName; + this.metric = toFieldName(); + } + + /** + * @return metric type + */ + public MetricStat getMetricStat() { + return metricStat; + } + + /** + * @return field Name + */ + public String getField() { + return field; + } + + /** + * @return the metric stat name + */ + public String getMetric() { + return metric; + } + + /** + * @return aggregator for the field value + */ + public ValueAggregator getValueAggregators() { + return valueAggregators; + } + + /** + * @return star tree aggregated value type + */ + public StarTreeNumericType getAggregatedValueType() { + return starTreeNumericType; + } + + /** + * @return field name with metric type and field + */ + public String toFieldName() { + return toFieldName(starFieldName, field, metricStat.getTypeName()); + + } + + public static String toFieldName(String starFieldName, String field, String metricName) { + return starFieldName + DELIMITER + field + DELIMITER + metricName; + } + + @Override + public int hashCode() { + return Objects.hashCode(toFieldName()); + } + + @Override + public boolean equals(Object obj) { + if (this == obj) { + return true; + } + if (obj instanceof MetricAggregatorInfo) { + MetricAggregatorInfo anotherPair = (MetricAggregatorInfo) obj; + return metricStat.equals(anotherPair.metricStat) && field.equals(anotherPair.field); + } + return false; + } + + @Override + public String toString() { + return toFieldName(); + } + + @Override + public int compareTo(MetricAggregatorInfo other) { + return Comparator.comparing((MetricAggregatorInfo o) -> o.field) + .thenComparing((MetricAggregatorInfo o) -> o.metricStat) + .compare(this, other); + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricEntry.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricEntry.java new file mode 100644 index 0000000000000..683153ecde689 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricEntry.java @@ -0,0 +1,35 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open 
source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; + +/** + * Holds the pair of metric name and it's associated stat + * + * @opensearch.experimental + */ +public class MetricEntry { + + private final String metricName; + private final MetricStat metricStat; + + public MetricEntry(String metricName, MetricStat metricStat) { + this.metricName = metricName; + this.metricStat = metricStat; + } + + public String getMetricName() { + return metricName; + } + + public MetricStat getMetricStat() { + return metricStat; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregator.java new file mode 100644 index 0000000000000..26acf44e778a7 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregator.java @@ -0,0 +1,75 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * Min value aggregator for star tree + * + * @opensearch.experimental + */ +public class MinValueAggregator implements ValueAggregator { + + public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.DOUBLE; + + @Override + public MetricStat getAggregationType() { + return MetricStat.MIN; + } + + @Override + public StarTreeNumericType getAggregatedValueType() { + return VALUE_AGGREGATOR_TYPE; + } + + @Override + public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return starTreeNumericType.getDoubleValue(segmentDocValue); + } + + @Override + public Double mergeAggregatedValueAndSegmentValue(Double value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + return Math.min(value, starTreeNumericType.getDoubleValue(segmentDocValue)); + } + + @Override + public Double mergeAggregatedValues(Double value, Double aggregatedValue) { + return Math.min(value, aggregatedValue); + } + + @Override + public Double getInitialAggregatedValue(Double value) { + return value; + } + + @Override + public int getMaxAggregatedValueByteSize() { + return Double.BYTES; + } + + @Override + public Long toLongValue(Double value) { + try { + return NumericUtils.doubleToSortableLong(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable long", e); + } + } + + @Override + public Double toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + try { + return type.getDoubleValue(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable aggregation type", e); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java new file mode 100644 index 
0000000000000..543b0f7f42374 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregator.java @@ -0,0 +1,97 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.search.aggregations.metrics.CompensatedSum; + +/** + * Sum value aggregator for star tree + * + * @opensearch.experimental + */ +public class SumValueAggregator implements ValueAggregator { + + public static final StarTreeNumericType VALUE_AGGREGATOR_TYPE = StarTreeNumericType.DOUBLE; + private double sum = 0; + private double compensation = 0; + private CompensatedSum kahanSummation = new CompensatedSum(0, 0); + + @Override + public MetricStat getAggregationType() { + return MetricStat.SUM; + } + + @Override + public StarTreeNumericType getAggregatedValueType() { + return VALUE_AGGREGATOR_TYPE; + } + + @Override + public Double getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + kahanSummation.reset(0, 0); + kahanSummation.add(starTreeNumericType.getDoubleValue(segmentDocValue)); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public Double mergeAggregatedValueAndSegmentValue(Double value, Long segmentDocValue, StarTreeNumericType starTreeNumericType) { + assert kahanSummation.value() == value; + kahanSummation.reset(sum, compensation); + kahanSummation.add(starTreeNumericType.getDoubleValue(segmentDocValue)); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public Double mergeAggregatedValues(Double value, Double aggregatedValue) { + assert kahanSummation.value() == aggregatedValue; + kahanSummation.reset(sum, compensation); + kahanSummation.add(value); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public Double getInitialAggregatedValue(Double value) { + kahanSummation.reset(0, 0); + kahanSummation.add(value); + compensation = kahanSummation.delta(); + sum = kahanSummation.value(); + return kahanSummation.value(); + } + + @Override + public int getMaxAggregatedValueByteSize() { + return Double.BYTES; + } + + @Override + public Long toLongValue(Double value) { + try { + return NumericUtils.doubleToSortableLong(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable long", e); + } + } + + @Override + public Double toStarTreeNumericTypeValue(Long value, StarTreeNumericType type) { + try { + return type.getDoubleValue(value); + } catch (Exception e) { + throw new IllegalStateException("Cannot convert " + value + " to sortable aggregation type", e); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java new file mode 100644 index 0000000000000..3dd1f85845c17 --- /dev/null 
+++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregator.java @@ -0,0 +1,64 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * A value aggregator that pre-aggregates on the input values for a specific type of aggregation. + * + * @opensearch.experimental + */ +public interface ValueAggregator<A> { + + /** + * Returns the type of the aggregation. + */ + MetricStat getAggregationType(); + + /** + * Returns the data type of the aggregated value. + */ + StarTreeNumericType getAggregatedValueType(); + + /** + * Returns the initial aggregated value for a segment document value. + */ + A getInitialAggregatedValueForSegmentDocValue(Long segmentDocValue, StarTreeNumericType starTreeNumericType); + + /** + * Applies a segment doc value to the current aggregated value. + */ + A mergeAggregatedValueAndSegmentValue(A value, Long segmentDocValue, StarTreeNumericType starTreeNumericType); + + /** + * Applies an aggregated value to the current aggregated value. + */ + A mergeAggregatedValues(A value, A aggregatedValue); + + /** + * Clones an aggregated value. + */ + A getInitialAggregatedValue(A value); + + /** + * Returns the maximum size in bytes of the aggregated values seen so far. + */ + int getMaxAggregatedValueByteSize(); + + /** + * Converts an aggregated value into a Long type. + */ + Long toLongValue(A value); + + /** + * Converts an aggregated value from a Long type. + */ + A toStarTreeNumericTypeValue(Long rawValue, StarTreeNumericType type); +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java new file mode 100644 index 0000000000000..79103679d5260 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactory.java @@ -0,0 +1,64 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; + +/** + * Value aggregator factory for a given aggregation type + * + * @opensearch.experimental + */ +public class ValueAggregatorFactory { + private ValueAggregatorFactory() {} + + /** + * Returns a new instance of value aggregator for the given aggregation type.
+ * + * @param aggregationType Aggregation type + * @return Value aggregator + */ + public static ValueAggregator getValueAggregator(MetricStat aggregationType) { + switch (aggregationType) { + // avg is computed at query time from the sum and count aggregators + case SUM: + return new SumValueAggregator(); + case COUNT: + return new CountValueAggregator(); + case MIN: + return new MinValueAggregator(); + case MAX: + return new MaxValueAggregator(); + default: + throw new IllegalStateException("Unsupported aggregation type: " + aggregationType); + } + } + + /** + * Returns the data type of the aggregated value for the given aggregation type. + * + * @param aggregationType Aggregation type + * @return Data type of the aggregated value + */ + public static StarTreeNumericType getAggregatedValueType(MetricStat aggregationType) { + switch (aggregationType) { + // avg is derived at query time from the sum and count aggregated values + case SUM: + return SumValueAggregator.VALUE_AGGREGATOR_TYPE; + case COUNT: + return CountValueAggregator.VALUE_AGGREGATOR_TYPE; + case MIN: + return MinValueAggregator.VALUE_AGGREGATOR_TYPE; + case MAX: + return MaxValueAggregator.VALUE_AGGREGATOR_TYPE; + default: + throw new IllegalStateException("Unsupported aggregation type: " + aggregationType); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java new file mode 100644 index 0000000000000..57fe573a6a93c --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericType.java @@ -0,0 +1,66 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
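A hedged usage sketch of the factory contract above: obtain an aggregator for a stat, seed it with the first segment value, then fold further values in. MetricStat.SUM and StarTreeNumericType.LONG come from the patch; the class name and the unchecked assignment are illustrative only:

```java
import org.opensearch.index.compositeindex.datacube.MetricStat;
import org.opensearch.index.compositeindex.datacube.startree.aggregators.ValueAggregator;
import org.opensearch.index.compositeindex.datacube.startree.aggregators.ValueAggregatorFactory;
import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType;

public class FactoryUsageSketch {
    public static void main(String[] args) {
        // the factory returns a raw ValueAggregator, hence the unchecked narrowing here
        @SuppressWarnings("unchecked")
        ValueAggregator<Double> sum = ValueAggregatorFactory.getValueAggregator(MetricStat.SUM);
        // seed with the first segment value, then fold in another one
        Double running = sum.getInitialAggregatedValueForSegmentDocValue(10L, StarTreeNumericType.LONG);
        running = sum.mergeAggregatedValueAndSegmentValue(running, 32L, StarTreeNumericType.LONG);
        System.out.println(running); // 42.0
    }
}
```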
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype; + +import org.opensearch.index.fielddata.IndexNumericFieldData; + +import java.util.function.Function; + +/** + * Enum to map Star Tree Numeric Types to Lucene's Numeric Type + * + * @opensearch.experimental + */ +public enum StarTreeNumericType { + + // TODO: Handle scaled floats + HALF_FLOAT(IndexNumericFieldData.NumericType.HALF_FLOAT, StarTreeNumericTypeConverters::halfFloatPointToDouble), + FLOAT(IndexNumericFieldData.NumericType.FLOAT, StarTreeNumericTypeConverters::floatPointToDouble), + LONG(IndexNumericFieldData.NumericType.LONG, StarTreeNumericTypeConverters::longToDouble), + DOUBLE(IndexNumericFieldData.NumericType.DOUBLE, StarTreeNumericTypeConverters::sortableLongtoDouble), + INT(IndexNumericFieldData.NumericType.INT, StarTreeNumericTypeConverters::intToDouble), + SHORT(IndexNumericFieldData.NumericType.SHORT, StarTreeNumericTypeConverters::shortToDouble), + BYTE(IndexNumericFieldData.NumericType.BYTE, StarTreeNumericTypeConverters::bytesToDouble), + UNSIGNED_LONG(IndexNumericFieldData.NumericType.UNSIGNED_LONG, StarTreeNumericTypeConverters::unsignedlongToDouble); + + final IndexNumericFieldData.NumericType numericType; + final Function<Long, Double> converter; + + StarTreeNumericType(IndexNumericFieldData.NumericType numericType, Function<Long, Double> converter) { + this.numericType = numericType; + this.converter = converter; + } + + public double getDoubleValue(long rawValue) { + return this.converter.apply(rawValue); + } + + public static StarTreeNumericType fromNumericType(IndexNumericFieldData.NumericType numericType) { + switch (numericType) { + case HALF_FLOAT: + return StarTreeNumericType.HALF_FLOAT; + case FLOAT: + return StarTreeNumericType.FLOAT; + case LONG: + return StarTreeNumericType.LONG; + case DOUBLE: + return StarTreeNumericType.DOUBLE; + case INT: + return StarTreeNumericType.INT; + case SHORT: + return StarTreeNumericType.SHORT; + case UNSIGNED_LONG: + return StarTreeNumericType.UNSIGNED_LONG; + case BYTE: + return StarTreeNumericType.BYTE; + default: + throw new UnsupportedOperationException("Unknown numeric type [" + numericType + "]"); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java new file mode 100644 index 0000000000000..eb7647c4f9851 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/StarTreeNumericTypeConverters.java @@ -0,0 +1,58 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype; + +import org.apache.lucene.sandbox.document.HalfFloatPoint; +import org.apache.lucene.util.NumericUtils; +import org.opensearch.common.Numbers; +import org.opensearch.common.annotation.ExperimentalApi; + +/** + * Numeric converters used during aggregations of metric values + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeNumericTypeConverters { + + public static double halfFloatPointToDouble(Long value) { + return HalfFloatPoint.sortableShortToHalfFloat((short) value.longValue()); + } + + public static double floatPointToDouble(Long value) { + return NumericUtils.sortableIntToFloat((int) value.longValue()); + } + + public static double longToDouble(Long value) { + return (double) value; + } + + public static double intToDouble(Long value) { + return (double) value; + } + + public static double shortToDouble(Long value) { + return (double) value; + } + + public static Double sortableLongtoDouble(Long value) { + return NumericUtils.sortableLongToDouble(value); + } + + public static double unsignedlongToDouble(Long value) { + return Numbers.unsignedLongToDouble(value); + } + + public static double bytesToDouble(Long value) { + byte[] bytes = new byte[8]; + NumericUtils.longToSortableBytes(value, bytes, 0); + return NumericUtils.sortableLongToDouble(NumericUtils.sortableBytesToLong(bytes, 0)); + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java new file mode 100644 index 0000000000000..18163056ec3b5 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/numerictype/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Numeric Types for Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java new file mode 100644 index 0000000000000..bddd6a46fbbe8 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
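Each converter above undoes the sortable encoding that Lucene doc values use for its numeric type. A small sketch of the two most common round trips, using the same NumericUtils helpers the converters call:

```java
import org.apache.lucene.util.NumericUtils;

public class SortableEncodingRoundTrip {
    public static void main(String[] args) {
        // Doubles survive the sortable-long encoding exactly, and numeric ordering is preserved.
        double d = -12.5;
        long sortable = NumericUtils.doubleToSortableLong(d);
        System.out.println(NumericUtils.sortableLongToDouble(sortable)); // -12.5

        // Floats are stored as sortable ints widened into the long doc value,
        // which is what floatPointToDouble above reverses.
        float f = 3.25f;
        long raw = NumericUtils.floatToSortableInt(f); // int widened to long
        System.out.println(NumericUtils.sortableIntToFloat((int) raw)); // 3.25
    }
}
```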
+ */ + +/** + * Aggregators for Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.aggregators; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java new file mode 100644 index 0000000000000..d28607c067a61 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilder.java @@ -0,0 +1,310 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.index.BaseStarTreeBuilder; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.store.IndexOutput; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.index.mapper.MapperService; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Objects; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * On heap based single tree builder + * + * @opensearch.experimental + */ +@ExperimentalApi +public class OnHeapStarTreeBuilder extends BaseStarTreeBuilder { + + private final List starTreeDocuments = new ArrayList<>(); + + /** + * Constructor for OnHeapStarTreeBuilder + * + * @param starTreeField star-tree field + * @param segmentWriteState segment write state + * @param mapperService helps with the numeric type of field + * @throws IOException throws an exception when we are unable to construct a star-tree using on-heap approach + */ + public OnHeapStarTreeBuilder( + IndexOutput metaOut, + IndexOutput dataOut, + StarTreeField starTreeField, + SegmentWriteState segmentWriteState, + MapperService mapperService + ) throws IOException { + super(metaOut, dataOut, starTreeField, segmentWriteState, mapperService); + } + + @Override + public void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException { + starTreeDocuments.add(starTreeDocument); + } + + @Override + public void build( + List starTreeValuesSubs, + AtomicInteger fieldNumberAcrossStarTrees, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException { + build(mergeStarTrees(starTreeValuesSubs), fieldNumberAcrossStarTrees, starTreeDocValuesConsumer); + } + + /** + * Sorts and aggregates the star-tree documents from multiple segments and builds star tree based on the newly + * aggregated star-tree documents + * + * @param starTreeValuesSubs StarTreeValues from multiple segments + * @return iterator of star tree documents + */ + Iterator mergeStarTrees(List starTreeValuesSubs) throws IOException { + return 
sortAndAggregateStarTreeDocuments(mergeStarTreeValues(starTreeValuesSubs)); + } + + /** + * Returns an array of all the starTreeDocuments from all the segments + * + * @param starTreeValuesSubs StarTreeValues from multiple segments + * @return array of star tree documents + */ + StarTreeDocument[] mergeStarTreeValues(List starTreeValuesSubs) throws IOException { + List starTreeDocuments = new ArrayList<>(); + for (StarTreeValues starTreeValues : starTreeValuesSubs) { + List dimensionsSplitOrder = starTreeValues.getStarTreeField().getDimensionsOrder(); + SequentialDocValuesIterator[] dimensionReaders = new SequentialDocValuesIterator[starTreeValues.getStarTreeField() + .getDimensionsOrder() + .size()]; + + for (int i = 0; i < dimensionsSplitOrder.size(); i++) { + String dimension = dimensionsSplitOrder.get(i).getField(); + dimensionReaders[i] = new SequentialDocValuesIterator(starTreeValues.getDimensionDocValuesIteratorMap().get(dimension)); + } + + List metricReaders = new ArrayList<>(); + for (Map.Entry metricDocValuesEntry : starTreeValues.getMetricDocValuesIteratorMap().entrySet()) { + metricReaders.add(new SequentialDocValuesIterator(metricDocValuesEntry.getValue())); + } + + boolean endOfDoc = false; + int currentDocId = 0; + while (!endOfDoc) { + Long[] dims = new Long[starTreeValues.getStarTreeField().getDimensionsOrder().size()]; + int i = 0; + for (SequentialDocValuesIterator dimensionDocValueIterator : dimensionReaders) { + int doc = dimensionDocValueIterator.nextDoc(currentDocId); + Long val = dimensionDocValueIterator.value(currentDocId); + // TODO : figure out how to identify a row with star tree docs here + endOfDoc = (doc == DocIdSetIterator.NO_MORE_DOCS); + if (endOfDoc) { + break; + } + dims[i] = val; + i++; + } + if (endOfDoc) { + break; + } + i = 0; + Object[] metrics = new Object[metricReaders.size()]; + for (SequentialDocValuesIterator metricDocValuesIterator : metricReaders) { + metricDocValuesIterator.nextDoc(currentDocId); + metrics[i] = metricDocValuesIterator.value(currentDocId); + i++; + } + StarTreeDocument starTreeDocument = new StarTreeDocument(dims, metrics); + starTreeDocuments.add(starTreeDocument); + currentDocId++; + } + } + StarTreeDocument[] starTreeDocumentsArr = new StarTreeDocument[starTreeDocuments.size()]; + return starTreeDocuments.toArray(starTreeDocumentsArr); + } + + @Override + public StarTreeDocument getStarTreeDocument(int docId) throws IOException { + return starTreeDocuments.get(docId); + } + + @Override + public List getStarTreeDocuments() { + return starTreeDocuments; + } + + @Override + public Long getDimensionValue(int docId, int dimensionId) throws IOException { + return starTreeDocuments.get(docId).dimensions[dimensionId]; + } + + /** + * Sorts and aggregates all the documents of the segment based on dimension and metrics configuration + * + * @param numDocs number of documents in the given segment + * @param dimensionReaders List of docValues readers to read dimensions from the segment + * @param metricReaders List of docValues readers to read metrics from the segment + * @return Iterator of star-tree documents + * + */ + @Override + public Iterator sortAndAggregateSegmentDocuments( + int numDocs, + SequentialDocValuesIterator[] dimensionReaders, + List metricReaders + ) throws IOException { + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[numDocs]; + for (int currentDocId = 0; currentDocId < numDocs; currentDocId++) { + starTreeDocuments[currentDocId] = getSegmentStarTreeDocument(currentDocId, dimensionReaders, 
metricReaders); + } + return sortAndAggregateStarTreeDocuments(starTreeDocuments); + } + + /** + * Sorts and aggregates the star-tree documents + * + * @param starTreeDocuments star-tree documents + * @return iterator for star-tree documents + * @throws IOException throws when unable to sort, merge and aggregate star-tree documents + */ + public Iterator sortAndAggregateStarTreeDocuments(StarTreeDocument[] starTreeDocuments) throws IOException { + // sort the documents + Arrays.sort(starTreeDocuments, (o1, o2) -> { + for (int i = 0; i < numDimensions; i++) { + if (o1.dimensions[i] == null && o2.dimensions[i] == null) { + return 0; + } + if (o1.dimensions[i] == null) { + return 1; + } + if (o2.dimensions[i] == null) { + return -1; + } + if (!Objects.equals(o1.dimensions[i], o2.dimensions[i])) { + return Long.compare(o1.dimensions[i], o2.dimensions[i]); + } + } + return 0; + }); + + // merge the documents + return mergeStarTreeDocuments(starTreeDocuments); + } + + /** + * Merges the star-tree documents based on dimensions + * + * @param starTreeDocuments star-tree documents + * @return iterator to aggregate star-tree documents + */ + private Iterator mergeStarTreeDocuments(StarTreeDocument[] starTreeDocuments) { + return new Iterator<>() { + boolean hasNext = true; + StarTreeDocument currentStarTreeDocument = starTreeDocuments[0]; + int docId = 1; + + @Override + public boolean hasNext() { + return hasNext; + } + + @Override + public StarTreeDocument next() { + // aggregate as we move on to the next doc + StarTreeDocument next = reduceSegmentStarTreeDocuments(null, currentStarTreeDocument); + while (docId < starTreeDocuments.length) { + StarTreeDocument starTreeDocument = starTreeDocuments[docId++]; + if (!Arrays.equals(starTreeDocument.dimensions, next.dimensions)) { + currentStarTreeDocument = starTreeDocument; + return next; + } else { + next = reduceSegmentStarTreeDocuments(next, starTreeDocument); + } + } + hasNext = false; + return next; + } + }; + } + + /** + * Generates a star-tree for a given star-node + * + * @param startDocId Start document id in the star-tree + * @param endDocId End document id (exclusive) in the star-tree + * @param dimensionId Dimension id of the star-node + * @return iterator for star-tree documents of star-node + * @throws IOException throws when unable to generate star-tree for star-node + */ + @Override + public Iterator generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) + throws IOException { + int numDocs = endDocId - startDocId; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[numDocs]; + for (int i = 0; i < numDocs; i++) { + starTreeDocuments[i] = getStarTreeDocument(startDocId + i); + } + Arrays.sort(starTreeDocuments, (o1, o2) -> { + for (int i = dimensionId + 1; i < numDimensions; i++) { + if (!Objects.equals(o1.dimensions[i], o2.dimensions[i])) { + return Long.compare(o1.dimensions[i], o2.dimensions[i]); + } + } + return 0; + }); + return new Iterator() { + boolean hasNext = true; + StarTreeDocument currentStarTreeDocument = starTreeDocuments[0]; + int docId = 1; + + private boolean hasSameDimensions(StarTreeDocument starTreeDocument1, StarTreeDocument starTreeDocument2) { + for (int i = dimensionId + 1; i < numDimensions; i++) { + if (!Objects.equals(starTreeDocument1.dimensions[i], starTreeDocument2.dimensions[i])) { + return false; + } + } + return true; + } + + @Override + public boolean hasNext() { + return hasNext; + } + + @Override + public StarTreeDocument next() { + StarTreeDocument next = 
reduceStarTreeDocuments(null, currentStarTreeDocument); + next.dimensions[dimensionId] = STAR_IN_DOC_VALUES_INDEX; + while (docId < numDocs) { + StarTreeDocument starTreeDocument = starTreeDocuments[docId++]; + if (!hasSameDimensions(starTreeDocument, currentStarTreeDocument)) { + currentStarTreeDocument = starTreeDocument; + return next; + } else { + next = reduceStarTreeDocuments(next, starTreeDocument); + } + } + hasNext = false; + return next; + } + }; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java new file mode 100644 index 0000000000000..4c0e8ad8938e7 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreeBuilder.java @@ -0,0 +1,58 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; + +import java.io.Closeable; +import java.io.IOException; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * A star-tree builder that builds a single star-tree. + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface StarTreeBuilder extends Closeable { + + /** + * Builds the star tree from the original segment documents + * + * @param fieldProducerMap contains the docValues producer to get docValues associated with each field + * @param fieldNumberAcrossStarTrees maintains the unique field number across the fields in the star tree + * @param starTreeDocValuesConsumer + * @throws IOException when we are unable to build star-tree + */ + + void build( + Map fieldProducerMap, + AtomicInteger fieldNumberAcrossStarTrees, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException; + + /** + * Builds the star tree using StarTree values from multiple segments + * + * @param starTreeValuesSubs contains the star tree values from multiple segments + * @param fieldNumberAcrossStarTrees maintains the unique field number across the fields in the star tree + * @param starTreeDocValuesConsumer + * @throws IOException when we are unable to build star-tree + */ + void build( + List starTreeValuesSubs, + AtomicInteger fieldNumberAcrossStarTrees, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java new file mode 100644 index 0000000000000..e4bf365332e7a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/StarTreesBuilder.java @@ -0,0 +1,148 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
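The sort-then-merge pattern used by sortAndAggregateStarTreeDocuments and mergeStarTreeDocuments above can be seen in isolation: sort documents by their dimension values with nulls ordered last, then collapse adjacent runs with equal dimensions while aggregating the metric. A toy sketch with a stand-in document type (Doc and the single sum metric are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class SortMergeSketch {
    // Toy stand-in for StarTreeDocument: dimension values (null = missing) plus one sum metric.
    static class Doc {
        final Long[] dims;
        final long metric;
        Doc(Long[] dims, long metric) { this.dims = dims; this.metric = metric; }
    }

    public static void main(String[] args) {
        Doc[] docs = {
            new Doc(new Long[] { 1L, null }, 7),
            new Doc(new Long[] { 1L, 2L }, 10),
            new Doc(new Long[] { 2L, 2L }, 1),
            new Doc(new Long[] { 1L, 2L }, 5),
        };
        // Sort by dimension values with nulls last, mirroring the comparator above.
        Arrays.sort(docs, (o1, o2) -> {
            for (int i = 0; i < o1.dims.length; i++) {
                if (Objects.equals(o1.dims[i], o2.dims[i])) continue;
                if (o1.dims[i] == null) return 1;
                if (o2.dims[i] == null) return -1;
                return Long.compare(o1.dims[i], o2.dims[i]);
            }
            return 0;
        });
        // Merge adjacent runs of equal dimensions, aggregating the metric.
        List<Doc> merged = new ArrayList<>();
        Doc current = docs[0];
        for (int i = 1; i < docs.length; i++) {
            if (Arrays.equals(docs[i].dims, current.dims)) {
                current = new Doc(current.dims, current.metric + docs[i].metric);
            } else {
                merged.add(current);
                current = docs[i];
            }
        }
        merged.add(current);
        for (Doc d : merged) {
            System.out.println(Arrays.toString(d.dims) + " -> " + d.metric);
        }
        // prints: [1, 2] -> 15, [1, null] -> 7, [2, 2] -> 1
    }
}
```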
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.store.IndexOutput; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.mapper.CompositeMappedFieldType; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; + +import java.io.Closeable; +import java.io.IOException; +import java.util.ArrayList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; + +/** + * Builder to construct star-trees based on multiple star-tree fields. + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreesBuilder implements Closeable { + + private static final Logger logger = LogManager.getLogger(StarTreesBuilder.class); + + private final List starTreeFields; + private final SegmentWriteState state; + private final MapperService mapperService; + private AtomicInteger fieldNumberAcrossStarTrees; + + public StarTreesBuilder(SegmentWriteState segmentWriteState, MapperService mapperService) { + List starTreeFields = new ArrayList<>(); + for (CompositeMappedFieldType compositeMappedFieldType : mapperService.getCompositeFieldTypes()) { + if (compositeMappedFieldType instanceof StarTreeMapper.StarTreeFieldType) { + StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) compositeMappedFieldType; + starTreeFields.add( + new StarTreeField( + starTreeFieldType.name(), + starTreeFieldType.getDimensions(), + starTreeFieldType.getMetrics(), + starTreeFieldType.getStarTreeConfig() + ) + ); + } + } + this.starTreeFields = starTreeFields; + this.state = segmentWriteState; + this.mapperService = mapperService; + this.fieldNumberAcrossStarTrees = new AtomicInteger(); + } + + /** + * Builds the star-trees. 
+ * + * @param metaOut meta output to write star-tree metadata to + * @param dataOut data output to write star-tree data to + * @param fieldProducerMap contains the docValues producer to get docValues associated with each field + * @param starTreeDocValuesConsumer consumer to write the star-tree doc values + */ + public void build( + IndexOutput metaOut, + IndexOutput dataOut, + Map<String, DocValuesProducer> fieldProducerMap, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException { + if (starTreeFields.isEmpty()) { + logger.debug("no star-tree fields found, returning from star-tree builder"); + return; + } + long startTime = System.currentTimeMillis(); + + int numStarTrees = starTreeFields.size(); + logger.debug("Starting to build {} star-trees with star-tree fields", numStarTrees); + + // Build all star-trees + for (int i = 0; i < numStarTrees; i++) { + StarTreeField starTreeField = starTreeFields.get(i); + try (StarTreeBuilder starTreeBuilder = getSingleTreeBuilder(metaOut, dataOut, starTreeField, state, mapperService)) { + starTreeBuilder.build(fieldProducerMap, fieldNumberAcrossStarTrees, starTreeDocValuesConsumer); + } + } + logger.debug("Took {} ms to build {} star-trees with star-tree fields", System.currentTimeMillis() - startTime, numStarTrees); + } + + @Override + public void close() throws IOException { + // TODO : close files + } + + /** + * Merges star tree fields from multiple segments + * + * @param metaOut meta output to write star-tree metadata to + * @param dataOut data output to write star-tree data to + * @param starTreeFieldMap StarTreeField configuration per field + * @param starTreeValuesSubsPerField starTreeValuesSubs per field + * @param starTreeDocValuesConsumer consumer to write the star-tree doc values + */ + public void buildDuringMerge( + IndexOutput metaOut, + IndexOutput dataOut, + final Map<String, StarTreeField> starTreeFieldMap, + final Map<String, List<StarTreeValues>> starTreeValuesSubsPerField, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException { + for (Map.Entry<String, List<StarTreeValues>> entry : starTreeValuesSubsPerField.entrySet()) { + List<StarTreeValues> starTreeValuesList = entry.getValue(); + StarTreeField starTreeField = starTreeFieldMap.get(entry.getKey()); + StarTreeBuilder builder = getSingleTreeBuilder(metaOut, dataOut, starTreeField, state, mapperService); + builder.build(starTreeValuesList, fieldNumberAcrossStarTrees, starTreeDocValuesConsumer); + } + } + + /** + * Get star-tree builder based on build mode. + */ + private static StarTreeBuilder getSingleTreeBuilder( + IndexOutput metaOut, + IndexOutput dataOut, + StarTreeField starTreeField, + SegmentWriteState state, + MapperService mapperService + ) throws IOException { + switch (starTreeField.getStarTreeConfig().getBuildMode()) { + case ON_HEAP: + return new OnHeapStarTreeBuilder(metaOut, dataOut, starTreeField, state, mapperService); + default: + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "No star tree implementation is available for [%s] build mode", + starTreeField.getStarTreeConfig().getBuildMode() + ) + ); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java new file mode 100644 index 0000000000000..9c97b076371a3 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/builder/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
+ */ + +/** + * Builders for Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.builder; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/StarTreeMetadata.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/StarTreeMetadata.java new file mode 100644 index 0000000000000..e328cb061e3f1 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/StarTreeMetadata.java @@ -0,0 +1,228 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.meta; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.index.CorruptIndexException; +import org.apache.lucene.store.IndexInput; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricEntry; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * Holds the metadata associated with a star-tree + * + * @opensearch.experimental + */ +public class StarTreeMetadata implements TreeMetadata { + private static final Logger logger = LogManager.getLogger(StarTreeMetadata.class); + private final IndexInput meta; + private final String starTreeFieldName; + private final String starTreeFieldType; + private final List<Integer> dimensionFieldNumbers; + private final List<MetricEntry> metricEntries; + private final Integer segmentAggregatedDocCount; + private final Integer maxLeafDocs; + private final Set<Integer> skipStarNodeCreationInDims; + private final StarTreeFieldConfiguration.StarTreeBuildMode starTreeBuildMode; + private final long dataStartFilePointer; + private final long dataLength; + + public StarTreeMetadata(IndexInput meta, String compositeFieldName, String compositeFieldType) throws IOException { + this.meta = meta; + try { + this.starTreeFieldName = compositeFieldName; + this.starTreeFieldType = compositeFieldType; + this.dimensionFieldNumbers = readStarTreeDimensions(); + this.metricEntries = readMetricEntries(); + this.segmentAggregatedDocCount = readSegmentAggregatedDocCount(); + this.maxLeafDocs = readMaxLeafDocs(); + this.skipStarNodeCreationInDims = readSkipStarNodeCreationInDims(); + this.starTreeBuildMode = readBuildMode(); + this.dataStartFilePointer = readDataStartFilePointer(); + this.dataLength = readDataLength(); + } catch (Exception e) { + logger.error("Unable to read star-tree metadata from the file", e); + throw new CorruptIndexException("Unable to read star-tree metadata from the file", meta, e); + } + } + + @Override + public int readDimensionsCount() throws IOException { + return meta.readInt(); + } + + @Override + public List<Integer> readStarTreeDimensions() throws IOException { + int dimensionCount = readDimensionsCount(); + List<Integer> dimensionFieldNumbers = new ArrayList<>(); + + for (int i = 0; i < dimensionCount; i++) { + dimensionFieldNumbers.add(meta.readInt()); + } + + return dimensionFieldNumbers; + } + + @Override + public int readMetricsCount() throws IOException { + return meta.readInt(); + } + + @Override + public List<MetricEntry>
readMetricEntries() throws IOException { + int metricCount = readMetricsCount(); + List<MetricEntry> metricEntries = new ArrayList<>(); + + for (int i = 0; i < metricCount; i++) { + String metricName = meta.readString(); + String metricStat = meta.readString(); + metricEntries.add(new MetricEntry(metricName, MetricStat.fromTypeName(metricStat))); + } + + return metricEntries; + } + + @Override + public int readSegmentAggregatedDocCount() throws IOException { + return meta.readInt(); + } + + @Override + public int readMaxLeafDocs() throws IOException { + return meta.readInt(); + } + + @Override + public int readSkipStarNodeCreationInDimsCount() throws IOException { + return meta.readInt(); + } + + @Override + public Set<Integer> readSkipStarNodeCreationInDims() throws IOException { + + int skipStarNodeCreationInDimsCount = readSkipStarNodeCreationInDimsCount(); + Set<Integer> skipStarNodeCreationInDims = new HashSet<>(); + for (int i = 0; i < skipStarNodeCreationInDimsCount; i++) { + skipStarNodeCreationInDims.add(meta.readInt()); + } + return skipStarNodeCreationInDims; + } + + @Override + public StarTreeFieldConfiguration.StarTreeBuildMode readBuildMode() throws IOException { + return StarTreeFieldConfiguration.StarTreeBuildMode.fromTypeName(meta.readString()); + } + + @Override + public long readDataStartFilePointer() throws IOException { + return meta.readLong(); + } + + @Override + public long readDataLength() throws IOException { + return meta.readLong(); + } + + /** + * Returns the name of the star-tree field. + * + * @return star-tree field name + */ + public String getStarTreeFieldName() { + return starTreeFieldName; + } + + /** + * Returns the type of the star tree field. + * + * @return star-tree field type + */ + public String getStarTreeFieldType() { + return starTreeFieldType; + } + + /** + * Returns the list of dimension field numbers. + * + * @return star-tree dimension field numbers + */ + public List<Integer> getDimensionFieldNumbers() { + return dimensionFieldNumbers; + } + + /** + * Returns the list of metric entries. + * + * @return star-tree metric entries + */ + public List<MetricEntry> getMetricEntries() { + return metricEntries; + } + + /** + * Returns the aggregated document count for the star-tree. + * + * @return the aggregated document count for the star-tree. + */ + public Integer getSegmentAggregatedDocCount() { + return segmentAggregatedDocCount; + } + + /** + * Returns the max leaf docs for the star-tree. + * + * @return the max leaf docs. + */ + public Integer getMaxLeafDocs() { + return maxLeafDocs; + } + + /** + * Returns the set of dimensions for which star node will not be created in the star-tree. + * + * @return the set of dimensions. + */ + public Set<Integer> getSkipStarNodeCreationInDims() { + return skipStarNodeCreationInDims; + } + + /** + * Returns the build mode for the star-tree. + * + * @return the star-tree build mode. + */ + public StarTreeFieldConfiguration.StarTreeBuildMode getStarTreeBuildMode() { + return starTreeBuildMode; + } + + /** + * Returns the file pointer to the start of the star-tree data.
+ * + * @return start file pointer for star-tree data + */ + public long getDataStartFilePointer() { + return dataStartFilePointer; + } + + /** + * Returns the length of the star-tree data + * + * @return length of the star-tree data + */ + public long getDataLength() { + return dataLength; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/TreeMetadata.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/TreeMetadata.java new file mode 100644 index 0000000000000..9859afad95a74 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/TreeMetadata.java @@ -0,0 +1,112 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.meta; + +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricEntry; + +import java.io.IOException; +import java.util.List; +import java.util.Set; + +/** + * An interface for metadata of the star-tree + * + * @opensearch.experimental + */ +public interface TreeMetadata { + + /** + * Reads the count of dimensions in the star-tree. + * + * @return the count of dimensions + * @throws IOException if an I/O error occurs while reading the dimensions count + */ + int readDimensionsCount() throws IOException; + + /** + * Reads the list of dimension ordinals in the star-tree. + * + * @return the list of dimension ordinals + * @throws IOException if an I/O error occurs while reading the dimension ordinals + */ + List<Integer> readStarTreeDimensions() throws IOException; + + /** + * Reads the count of metrics in the star-tree. + * + * @return the count of metrics + * @throws IOException if an I/O error occurs while reading the metrics count + */ + int readMetricsCount() throws IOException; + + /** + * Reads the list of metric entries in the star-tree. + * + * @return the list of metric entries + * @throws IOException if an I/O error occurs while reading the metric entries + */ + List<MetricEntry> readMetricEntries() throws IOException; + + /** + * Reads the aggregated document count for the segment in the star-tree. + * + * @return the aggregated document count for the segment + * @throws IOException if an I/O error occurs while reading the aggregated document count + */ + int readSegmentAggregatedDocCount() throws IOException; + + /** + * Reads the max leaf docs for the star-tree. + * + * @return the max leaf docs for the star-tree + * @throws IOException if an I/O error occurs while reading the max leaf docs + */ + int readMaxLeafDocs() throws IOException; + + /** + * Reads the count of dimensions where star node will not be created in the star-tree. + * + * @return the count of dimensions + * @throws IOException if an I/O error occurs while reading the skip star node dimensions count + */ + int readSkipStarNodeCreationInDimsCount() throws IOException; + + /** + * Reads the list of dimensions field numbers to be skipped for star node creation in the star-tree. + * + * @return the set of dimensions field numbers to be skipped for star node creation. + * @throws IOException if an I/O error occurs while reading the dimensions + */ + Set<Integer> readSkipStarNodeCreationInDims() throws IOException; + + /** + * Reads the build mode for the star-tree.
+ * + * @return the star-tree build mode + * @throws IOException if an I/O error occurs while reading the build mode + */ + StarTreeFieldConfiguration.StarTreeBuildMode readBuildMode() throws IOException; + + /** + * Reads the file pointer to the start of the star-tree data. + * + * @return the file pointer to the start of the star-tree data + * @throws IOException if an I/O error occurs while reading the star-tree data start file pointer + */ + long readDataStartFilePointer() throws IOException; + + /** + * Reads the length of the data of the star-tree. + * + * @return the length of the data + * @throws IOException if an I/O error occurs while reading the data length + */ + long readDataLength() throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/package-info.java new file mode 100644 index 0000000000000..568cacea4b59a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/meta/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Meta package for star tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.meta; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTree.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTree.java new file mode 100644 index 0000000000000..1f16bd924dac6 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTree.java @@ -0,0 +1,65 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.node; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.store.IndexInput; +import org.apache.lucene.store.RandomAccessInput; +import org.opensearch.index.compositeindex.datacube.startree.meta.StarTreeMetadata; + +import java.io.IOException; + +import static org.opensearch.index.compositeindex.CompositeIndexConstants.MAGIC_MARKER; +import static org.opensearch.index.compositeindex.CompositeIndexConstants.VERSION; + +/** + * Off heap implementation of the star-tree. 
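TreeMetadata pairs every metadata field with a read method, which implies the writer must emit exactly the mirrored sequence of writes. A minimal sketch of that count-prefixed read/write symmetry using Lucene's IndexOutput/IndexInput over an in-memory directory (the file name and layout here are illustrative, not the actual star-tree meta format):

```java
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.IOContext;
import org.apache.lucene.store.IndexInput;
import org.apache.lucene.store.IndexOutput;

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class MetaRoundTripSketch {
    public static void main(String[] args) throws IOException {
        try (Directory dir = new ByteBuffersDirectory()) {
            // Write a count-prefixed list of ordinals, the way the metadata writer is assumed to.
            try (IndexOutput out = dir.createOutput("meta", IOContext.DEFAULT)) {
                int[] dimensionOrdinals = { 3, 7, 11 };
                out.writeInt(dimensionOrdinals.length);
                for (int ordinal : dimensionOrdinals) {
                    out.writeInt(ordinal);
                }
            }
            // Read it back with the mirrored sequence of reads.
            try (IndexInput in = dir.openInput("meta", IOContext.DEFAULT)) {
                int count = in.readInt();
                List<Integer> ordinals = new ArrayList<>();
                for (int i = 0; i < count; i++) {
                    ordinals.add(in.readInt());
                }
                System.out.println(ordinals); // [3, 7, 11]
            }
        }
    }
}
```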
+ * + * @opensearch.experimental + */ +public class OffHeapStarTree implements StarTree { + private static final Logger logger = LogManager.getLogger(OffHeapStarTree.class); + private final OffHeapStarTreeNode root; + private final Integer numNodes; + + public OffHeapStarTree(IndexInput data, StarTreeMetadata starTreeMetadata) throws IOException { + long magicMarker = data.readLong(); + if (MAGIC_MARKER != magicMarker) { + logger.error("Invalid magic marker"); + throw new IOException("Invalid magic marker"); + } + int version = data.readInt(); + if (VERSION != version) { + logger.error("Invalid star tree version"); + throw new IOException("Invalid version"); + } + numNodes = data.readInt(); // num nodes + + // randomAccessSlice takes an offset and a length, not an end offset + RandomAccessInput in = data.randomAccessSlice( + starTreeMetadata.getDataStartFilePointer(), + starTreeMetadata.getDataLength() + ); + root = new OffHeapStarTreeNode(in, 0); + } + + @Override + public StarTreeNode getRoot() { + return root; + } + + /** + * Returns the number of nodes in star-tree + * + * @return number of nodes in the star-tree + */ + public Integer getNumNodes() { + return numNodes; + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTreeNode.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTreeNode.java new file mode 100644 index 0000000000000..91ad56ef70eb3 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/OffHeapStarTreeNode.java @@ -0,0 +1,183 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license.
+ */ +package org.opensearch.index.compositeindex.datacube.startree.node; + +import org.apache.lucene.store.RandomAccessInput; + +import java.io.IOException; +import java.util.Iterator; + +/** + * Off heap implementation of {@link StarTreeNode} + * + * @opensearch.experimental + */ +public class OffHeapStarTreeNode implements StarTreeNode { + public static final int NUM_INT_SERIALIZABLE_FIELDS = 7; + public static final int NUM_LONG_SERIALIZABLE_FIELDS = 1; + public static final long SERIALIZABLE_DATA_SIZE_IN_BYTES = (Integer.BYTES * NUM_INT_SERIALIZABLE_FIELDS) + (Long.BYTES + * NUM_LONG_SERIALIZABLE_FIELDS); + private static final int DIMENSION_ID_OFFSET = 0; + private static final int DIMENSION_VALUE_OFFSET = DIMENSION_ID_OFFSET + Integer.BYTES; + private static final int START_DOC_ID_OFFSET = DIMENSION_VALUE_OFFSET + Long.BYTES; + private static final int END_DOC_ID_OFFSET = START_DOC_ID_OFFSET + Integer.BYTES; + private static final int AGGREGATE_DOC_ID_OFFSET = END_DOC_ID_OFFSET + Integer.BYTES; + private static final int IS_STAR_NODE_OFFSET = AGGREGATE_DOC_ID_OFFSET + Integer.BYTES; + private static final int FIRST_CHILD_ID_OFFSET = IS_STAR_NODE_OFFSET + Integer.BYTES; + private static final int LAST_CHILD_ID_OFFSET = FIRST_CHILD_ID_OFFSET + Integer.BYTES; + + public static final int INVALID_ID = -1; + + private final int nodeId; + private final int firstChildId; + + RandomAccessInput in; + + public OffHeapStarTreeNode(RandomAccessInput in, int nodeId) throws IOException { + this.in = in; + this.nodeId = nodeId; + firstChildId = getInt(FIRST_CHILD_ID_OFFSET); + } + + private int getInt(int fieldOffset) throws IOException { + return in.readInt(nodeId * SERIALIZABLE_DATA_SIZE_IN_BYTES + fieldOffset); + } + + private long getLong(int fieldOffset) throws IOException { + return in.readLong(nodeId * SERIALIZABLE_DATA_SIZE_IN_BYTES + fieldOffset); + } + + private byte getByte(int fieldOffset) throws IOException { + return in.readByte(nodeId * SERIALIZABLE_DATA_SIZE_IN_BYTES + fieldOffset); + } + + @Override + public int getDimensionId() throws IOException { + return getInt(DIMENSION_ID_OFFSET); + } + + @Override + public long getDimensionValue() throws IOException { + return getLong(DIMENSION_VALUE_OFFSET); + } + + @Override + public int getChildDimensionId() throws IOException { + if (firstChildId == INVALID_ID) { + return INVALID_ID; + } else { + return in.readInt(firstChildId * SERIALIZABLE_DATA_SIZE_IN_BYTES); + } + } + + @Override + public int getStartDocId() throws IOException { + return getInt(START_DOC_ID_OFFSET); + } + + @Override + public int getEndDocId() throws IOException { + return getInt(END_DOC_ID_OFFSET); + } + + @Override + public int getAggregatedDocId() throws IOException { + return getInt(AGGREGATE_DOC_ID_OFFSET); + } + + @Override + public int getNumChildren() throws IOException { + if (firstChildId == INVALID_ID) { + return 0; + } else { + return getInt(LAST_CHILD_ID_OFFSET) - firstChildId + 1; + } + } + + @Override + public boolean isLeaf() { + return firstChildId == INVALID_ID; + } + + @Override + public boolean isStarNode() throws IOException { + return getByte(IS_STAR_NODE_OFFSET) != 0; + } + + @Override + public StarTreeNode getChildForDimensionValue(long dimensionValue) throws IOException { + // there will be no children for leaf nodes + if (isLeaf()) { + return null; + } + + // Specialize star node for performance + if (dimensionValue == ALL) { + return handleStarNode(); + } + + return binarySearchChild(dimensionValue); + } + + private OffHeapStarTreeNode 
handleStarNode() throws IOException { + OffHeapStarTreeNode firstNode = new OffHeapStarTreeNode(in, firstChildId); + if (firstNode.getDimensionValue() == ALL) { + return firstNode; + } else { + return null; + } + } + + private OffHeapStarTreeNode binarySearchChild(long dimensionValue) throws IOException { + // Binary search to find child node + int low = firstChildId; + int high = getInt(LAST_CHILD_ID_OFFSET); + + while (low <= high) { + int mid = low + (high - low) / 2; + OffHeapStarTreeNode midNode = new OffHeapStarTreeNode(in, mid); + long midNodeDimensionValue = midNode.getDimensionValue(); + + if (midNodeDimensionValue == dimensionValue) { + return midNode; + } else if (midNodeDimensionValue < dimensionValue) { + low = mid + 1; + } else { + high = mid - 1; + } + } + return null; + } + + @Override + public Iterator getChildrenIterator() throws IOException { + return new Iterator<>() { + private int currentChildId = firstChildId; + private final int lastChildId = getInt(LAST_CHILD_ID_OFFSET); + + @Override + public boolean hasNext() { + return currentChildId <= lastChildId; + } + + @Override + public OffHeapStarTreeNode next() { + try { + return new OffHeapStarTreeNode(in, currentChildId++); + } catch (IOException e) { + throw new RuntimeException(e); + } + } + + @Override + public void remove() { + throw new UnsupportedOperationException(); + } + }; + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTree.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTree.java new file mode 100644 index 0000000000000..e21dc225a4c8f --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTree.java @@ -0,0 +1,23 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.node; + +/** + * Interface for star-tree. + * + * @opensearch.experimental + */ +public interface StarTree { + + /** + * Fetches the root node of the star-tree. + * @return the root of the star-tree + */ + StarTreeNode getRoot(); + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java new file mode 100644 index 0000000000000..59522ffa4be89 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/StarTreeNode.java @@ -0,0 +1,112 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.node; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.io.IOException; +import java.util.Iterator; + +/** + * Interface that represents star tree node + * + * @opensearch.experimental + */ +@ExperimentalApi +public interface StarTreeNode { + long ALL = -1l; + + /** + * Returns the dimension ID of the current star-tree node. + * + * @return the dimension ID + * @throws IOException if an I/O error occurs while reading the dimension ID + */ + int getDimensionId() throws IOException; + + /** + * Returns the dimension value of the current star-tree node. 
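binarySearchChild above works because serialized nodes are fixed-width records addressed by id, so each probe is just a seek to id times the record size. The same idea in a self-contained form, with a toy record holding only a sorted long per entry:

```java
import java.nio.ByteBuffer;

public class FixedWidthBinarySearchSketch {
    private static final int RECORD_SIZE = Long.BYTES; // one long "dimension value" per record

    public static void main(String[] args) {
        // Records laid out back to back, sorted by dimension value -- a toy analogue
        // of the serialized child nodes searched above.
        long[] dimensionValues = { 2L, 5L, 9L, 13L };
        ByteBuffer data = ByteBuffer.allocate(dimensionValues.length * RECORD_SIZE);
        for (long v : dimensionValues) {
            data.putLong(v);
        }
        System.out.println(findRecord(data, dimensionValues.length, 9L)); // 2
        System.out.println(findRecord(data, dimensionValues.length, 4L)); // -1
    }

    // Binary search by record id; each probe reads at recordId * RECORD_SIZE.
    static int findRecord(ByteBuffer data, int numRecords, long target) {
        int low = 0, high = numRecords - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            long value = data.getLong(mid * RECORD_SIZE);
            if (value == target) return mid;
            if (value < target) low = mid + 1;
            else high = mid - 1;
        }
        return -1;
    }
}
```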
+ * + * @return the dimension value + * @throws IOException if an I/O error occurs while reading the dimension value + */ + long getDimensionValue() throws IOException; + + /** + * Returns the dimension ID of the child star-tree node. + * + * @return the child dimension ID + * @throws IOException if an I/O error occurs while reading the child dimension ID + */ + int getChildDimensionId() throws IOException; + + /** + * Returns the start document ID of the current star-tree node. + * + * @return the start document ID + * @throws IOException if an I/O error occurs while reading the start document ID + */ + int getStartDocId() throws IOException; + + /** + * Returns the end document ID of the current star-tree node. + * + * @return the end document ID + * @throws IOException if an I/O error occurs while reading the end document ID + */ + int getEndDocId() throws IOException; + + /** + * Returns the aggregated document ID of the current star-tree node. + * + * @return the aggregated document ID + * @throws IOException if an I/O error occurs while reading the aggregated document ID + */ + int getAggregatedDocId() throws IOException; + + /** + * Returns the number of children of the current star-tree node. + * + * @return the number of children + * @throws IOException if an I/O error occurs while reading the number of children + */ + int getNumChildren() throws IOException; + + /** + * Checks if the current node is a leaf star-tree node. + * + * @return true if the node is a leaf node, false otherwise + */ + boolean isLeaf(); + + /** + * Checks if the current node is a star node. + * + * @return true if the node is a star node, false otherwise + * @throws IOException if an I/O error occurs while reading the star node status + */ + boolean isStarNode() throws IOException; + + /** + * Returns the child star-tree node for the given dimension value. + * + * @param dimensionValue the dimension value + * @return the child node for the given dimension value or null if child is not present + * @throws IOException if an I/O error occurs while retrieving the child node + */ + StarTreeNode getChildForDimensionValue(long dimensionValue) throws IOException; + + /** + * Returns an iterator over the children of the current star-tree node. + * + * @return an iterator over the children + * @throws IOException if an I/O error occurs while retrieving the children iterator + */ + Iterator getChildrenIterator() throws IOException; +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java new file mode 100644 index 0000000000000..19d12bc6318d7 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/node/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
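A hedged sketch (not part of the patch) of how a reader could descend the StarTreeNode interface above: one child lookup per dimension, taking the star (ALL) child for dimensions that are left unconstrained. The helper name descend and the null-means-unconstrained convention are hypothetical:

```java
import org.opensearch.index.compositeindex.datacube.startree.node.StarTreeNode;

import java.io.IOException;

public final class StarTreeTraversalSketch {
    private StarTreeTraversalSketch() {}

    // Descend one level per dimension; a null value means the dimension is
    // unconstrained, so the pre-aggregated star child is used instead.
    public static StarTreeNode descend(StarTreeNode root, Long[] dimensionValues) throws IOException {
        StarTreeNode node = root;
        for (Long value : dimensionValues) {
            StarTreeNode child = (value == null)
                ? node.getChildForDimensionValue(StarTreeNode.ALL)
                : node.getChildForDimensionValue(value);
            if (child == null) {
                return null; // no matching path in the tree
            }
            node = child;
        }
        return node;
    }
}
```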
+ */ + +/** + * Holds classes associated with star tree node + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.node; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java new file mode 100644 index 0000000000000..6d6cb420f4a9e --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/package-info.java @@ -0,0 +1,13 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +/** + * Core classes for handling star tree index. + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java new file mode 100644 index 0000000000000..0a9625b151183 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/SequentialDocValuesIterator.java @@ -0,0 +1,160 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.utils; + +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.search.DocIdSetIterator; +import org.opensearch.common.annotation.ExperimentalApi; + +import java.io.IOException; + +/** + * Coordinates the reading of documents across multiple DocIdSetIterators. + * It encapsulates a single DocIdSetIterator and maintains the latest document ID and its associated value. + * + * @opensearch.experimental + */ +@ExperimentalApi +public class SequentialDocValuesIterator { + + /** + * The doc id set iterator associated for each field. + */ + private final DocIdSetIterator docIdSetIterator; + + /** + * The value associated with the latest document. + */ + private Long docValue; + + /** + * The id of the latest document. + */ + private int docId = -1; + + /** + * Constructs a new SequentialDocValuesIterator instance with the given DocIdSetIterator. + * + * @param docIdSetIterator the DocIdSetIterator to be associated with this instance + */ + public SequentialDocValuesIterator(DocIdSetIterator docIdSetIterator) { + this.docIdSetIterator = docIdSetIterator; + } + + /** + * Creates a SequentialDocValuesIterator with an empty DocIdSetIterator. + * + */ + public SequentialDocValuesIterator() { + this.docIdSetIterator = new DocIdSetIterator() { + @Override + public int docID() { + return NO_MORE_DOCS; + } + + @Override + public int nextDoc() { + return NO_MORE_DOCS; + } + + @Override + public int advance(int target) { + return NO_MORE_DOCS; + } + + @Override + public long cost() { + return 0; + } + }; + } + + /** + * Returns the value associated with the latest document. + * + * @return the value associated with the latest document + */ + public Long getDocValue() { + return docValue; + } + + /** + * Sets the value associated with the latest document. 
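SequentialDocValuesIterator wraps a Lucene DocIdSetIterator, whose contract is simply to walk doc ids forward until NO_MORE_DOCS is returned. A minimal iterator over a fixed array makes that contract concrete (fromDocIds is an illustrative helper, not an OpenSearch API):

```java
import org.apache.lucene.search.DocIdSetIterator;

import java.io.IOException;

public class DocIdIteratorSketch {
    // Minimal DocIdSetIterator over a fixed, ascending array of doc ids.
    static DocIdSetIterator fromDocIds(int[] docIds) {
        return new DocIdSetIterator() {
            private int index = -1;

            @Override
            public int docID() {
                if (index < 0) return -1; // not positioned yet
                return index >= docIds.length ? NO_MORE_DOCS : docIds[index];
            }

            @Override
            public int nextDoc() {
                index++;
                return docID();
            }

            @Override
            public int advance(int target) throws IOException {
                int doc;
                // step forward until we reach or pass the target (NO_MORE_DOCS ends the loop)
                while ((doc = nextDoc()) < target) {}
                return doc;
            }

            @Override
            public long cost() {
                return docIds.length;
            }
        };
    }

    public static void main(String[] args) throws IOException {
        DocIdSetIterator it = fromDocIds(new int[] { 0, 3, 7 });
        for (int doc = it.nextDoc(); doc != DocIdSetIterator.NO_MORE_DOCS; doc = it.nextDoc()) {
            System.out.println("doc " + doc);
        }
    }
}
```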
+ * + * @param docValue the value to be associated with the latest document + */ + public void setDocValue(Long docValue) { + this.docValue = docValue; + } + + /** + * Returns the id of the latest document. + * + * @return the id of the latest document + */ + public int getDocId() { + return docId; + } + + /** + * Sets the id of the latest document. + * + * @param docId the ID of the latest document + */ + public void setDocId(int docId) { + this.docId = docId; + } + + /** + * Returns the DocIdSetIterator associated with this instance. + * + * @return the DocIdSetIterator associated with this instance + */ + public DocIdSetIterator getDocIdSetIterator() { + return docIdSetIterator; + } + + public int nextDoc(int currentDocId) throws IOException { + // if the stored doc id is greater than or equal to the requested doc id, return the stored doc id + if (docId >= currentDocId) { + return docId; + } + setDocId(this.docIdSetIterator.nextDoc()); + return docId; + } + + public Long value(int currentDocId) throws IOException { + if (this.getDocIdSetIterator() instanceof SortedNumericDocValues) { + SortedNumericDocValues sortedNumericDocValues = (SortedNumericDocValues) this.getDocIdSetIterator(); + if (currentDocId < 0) { + throw new IllegalStateException("invalid doc id to fetch the next value"); + } + if (currentDocId == DocIdSetIterator.NO_MORE_DOCS) { + throw new IllegalStateException("DocValuesIterator is already exhausted"); + } + + if (docId == DocIdSetIterator.NO_MORE_DOCS) { + return null; + } + + if (docValue == null) { + setDocValue(sortedNumericDocValues.nextValue()); + } + if (docId == currentDocId) { + Long nextValue = docValue; + docValue = null; + return nextValue; + } else { + return null; + } + } else { + throw new IllegalStateException("Unsupported Iterator requested for SequentialDocValuesIterator"); + } + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeBuilderUtils.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeBuilderUtils.java new file mode 100644 index 0000000000000..4e12f8e60d449 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeBuilderUtils.java @@ -0,0 +1,105 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ +package org.opensearch.index.compositeindex.datacube.startree.utils; + +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.store.IndexOutput; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +import java.io.IOException; +import java.util.List; +import java.util.Map; + +/** + * Util class for building star tree + * + * @opensearch.experimental + */ +public class StarTreeBuilderUtils { + + private StarTreeBuilderUtils() {} + + public static final int ALL = -1; + + /** + * Represents a node in a tree data structure, specifically designed for a star-tree implementation. + * A star-tree node will represent both star and non-star nodes. + */ + @ExperimentalApi + public static class TreeNode { + + /** + * The dimension id for the dimension (field) associated with this star-tree node.
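+ * Defaults to ALL (-1), which indicates that no dimension is associated with the node.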
+ */ + public int dimensionId = ALL; + + /** + * The starting document id (inclusive) associated with this star-tree node. + */ + public int startDocId = ALL; + + /** + * The ending document id (exclusive) associated with this star-tree node. + */ + public int endDocId = ALL; + + /** + * The aggregated document id associated with this star-tree node. + */ + public int aggregatedDocId = ALL; + + /** + * The child dimension identifier associated with this star-tree node. + */ + public int childDimensionId = ALL; + + /** + * The value of the dimension associated with this star-tree node. + */ + public long dimensionValue = ALL; + + /** + * A flag indicating whether this node is a star node (a node that represents the aggregation across all values of a dimension). + */ + public boolean isStarNode = false; + + /** + * A map containing the child nodes of this star-tree node, keyed by their dimension value. + */ + public Map children; + } + + public static long serializeStarTree(IndexOutput dataOut, TreeNode rootNode, int numNodes) throws IOException { + return StarTreeDataSerializer.serializeStarTree(dataOut, rootNode, numNodes); + } + + public static void serializeStarTreeMetadata( + IndexOutput metaOut, + StarTreeField starTreeField, + SegmentWriteState writeState, + List metricAggregatorInfos, + Integer segmentAggregatedCount, + long dataFilePointer, + long dataFileLength + ) throws IOException { + StarTreeMetaSerializer.serializeStarTreeMetadata( + metaOut, + CompositeMappedFieldType.CompositeFieldType.STAR_TREE, + starTreeField, + writeState, + metricAggregatorInfos, + segmentAggregatedCount, + dataFilePointer, + dataFileLength + ); + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeDataSerializer.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeDataSerializer.java new file mode 100644 index 0000000000000..54059458a6d0d --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeDataSerializer.java @@ -0,0 +1,138 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.utils; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.store.IndexOutput; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Comparator; +import java.util.LinkedList; +import java.util.List; +import java.util.Queue; + +import static org.opensearch.index.compositeindex.CompositeIndexConstants.MAGIC_MARKER; +import static org.opensearch.index.compositeindex.CompositeIndexConstants.VERSION; +import static org.opensearch.index.compositeindex.datacube.startree.node.OffHeapStarTreeNode.SERIALIZABLE_DATA_SIZE_IN_BYTES; +import static org.opensearch.index.compositeindex.datacube.startree.utils.StarTreeBuilderUtils.ALL; + +/** + * Utility class for serializing a star-tree data structure. + * + * @opensearch.experimental + */ +public class StarTreeDataSerializer { + + private static final Logger logger = LogManager.getLogger(StarTreeDataSerializer.class); + + /** + * Serializes the star-tree data structure.
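+ * The nodes are written breadth-first after a fixed-size header, each occupying {@code SERIALIZABLE_DATA_SIZE_IN_BYTES} bytes, so the children of a node form a contiguous run identified by their first and last child ids.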
+ * + * @param indexOutput the IndexOutput to write the star-tree data + * @param rootNode the root node of the star-tree + * @param numNodes the total number of nodes in the star-tree + * @return the total size in bytes of the serialized star-tree data + * @throws IOException if an I/O error occurs while writing the star-tree data + */ + public static long serializeStarTree(IndexOutput indexOutput, StarTreeBuilderUtils.TreeNode rootNode, int numNodes) throws IOException { + int headerSizeInBytes = computeStarTreeDataHeaderByteSize(); + long totalSizeInBytes = headerSizeInBytes + (long) numNodes * SERIALIZABLE_DATA_SIZE_IN_BYTES; + + logger.info("Star tree size in bytes : {}", totalSizeInBytes); + + writeStarTreeHeader(indexOutput, numNodes); + writeStarTreeNodes(indexOutput, rootNode); + return totalSizeInBytes; + } + + /** + * Computes the byte size of the star-tree data header. + * + * @return the byte size of the star-tree data header + */ + private static int computeStarTreeDataHeaderByteSize() { + // Magic marker (8), version (4) + int headerSizeInBytes = 12; + + // For number of nodes. + headerSizeInBytes += Integer.BYTES; + return headerSizeInBytes; + } + + /** + * Writes the star-tree data header. + * + * @param output the IndexOutput to write the header + * @param numNodes the total number of nodes in the star-tree + * @throws IOException if an I/O error occurs while writing the header + */ + private static void writeStarTreeHeader(IndexOutput output, int numNodes) throws IOException { + output.writeLong(MAGIC_MARKER); + output.writeInt(VERSION); + output.writeInt(numNodes); + } + + /** + * Writes the star-tree nodes in a breadth-first order. + * + * @param output the IndexOutput to write the nodes + * @param rootNode the root node of the star-tree + * @throws IOException if an I/O error occurs while writing the nodes + */ + private static void writeStarTreeNodes(IndexOutput output, StarTreeBuilderUtils.TreeNode rootNode) throws IOException { + Queue queue = new LinkedList<>(); + queue.add(rootNode); + + int currentNodeId = 0; + while (!queue.isEmpty()) { + StarTreeBuilderUtils.TreeNode node = queue.remove(); + + if (node.children == null) { + writeStarTreeNode(output, node, ALL, ALL); + } else { + + // Sort all children nodes based on dimension value + List sortedChildren = new ArrayList<>(node.children.values()); + sortedChildren.sort(Comparator.comparingLong(o -> o.dimensionValue)); + + int firstChildId = currentNodeId + queue.size() + 1; + int lastChildId = firstChildId + sortedChildren.size() - 1; + writeStarTreeNode(output, node, firstChildId, lastChildId); + + queue.addAll(sortedChildren); + } + + currentNodeId++; + } + } + + /** + * Writes a single star-tree node + * + * @param output the IndexOutput to write the node + * @param node the star tree node to write + * @param firstChildId the ID of the first child node + * @param lastChildId the ID of the last child node + * @throws IOException if an I/O error occurs while writing the node + */ + private static void writeStarTreeNode(IndexOutput output, StarTreeBuilderUtils.TreeNode node, int firstChildId, int lastChildId) + throws IOException { + output.writeInt(node.dimensionId); + output.writeLong(node.dimensionValue); + output.writeInt(node.startDocId); + output.writeInt(node.endDocId); + output.writeInt(node.aggregatedDocId); + output.writeByte(node.isStarNode == false ? 
(byte) 0 : (byte) 1); + output.writeInt(firstChildId); + output.writeInt(lastChildId); + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeHelper.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeHelper.java new file mode 100644 index 0000000000000..4192d6deff99e --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeHelper.java @@ -0,0 +1,38 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.utils; + +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; + +/** + * This class contains helper methods used throughout the Star Tree index implementation. + * + * @opensearch.experimental + */ +public class StarTreeHelper { + + /** + * The suffix appended to dimension field names in the Star Tree index. + */ + public static final String DIMENSION_SUFFIX = "_dim"; + + /** + * The suffix appended to metric field names in the Star Tree index. + */ + public static final String METRIC_SUFFIX = "_metric"; + + public static String fullFieldNameForStarTreeDimensionsDocValues(String starTreeFieldName, String dimensionName) { + return starTreeFieldName + "_" + dimensionName + DIMENSION_SUFFIX; + } + + public static String fullFieldNameForStarTreeMetricsDocValues(String name, String fieldName, String metricName) { + return MetricAggregatorInfo.toFieldName(name, fieldName, metricName) + METRIC_SUFFIX; + } + +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeMetaSerializer.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeMetaSerializer.java new file mode 100644 index 0000000000000..6f4b15611a03a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/StarTreeMetaSerializer.java @@ -0,0 +1,223 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.utils; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.store.IndexOutput; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; +import org.opensearch.index.mapper.CompositeMappedFieldType; + +import java.io.IOException; +import java.util.List; + +import static java.nio.charset.StandardCharsets.UTF_8; +import static org.opensearch.index.compositeindex.CompositeIndexConstants.MAGIC_MARKER; +import static org.opensearch.index.compositeindex.CompositeIndexConstants.VERSION; + +/** + * The utility class for serializing the metadata of a star-tree data structure. + * The metadata includes information about the dimensions, metrics, and other relevant details + * related to the star tree.
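+ * The layout mirrors the write order below: a header (magic marker, version, star-tree field name and type), then the dimension count and dimension field numbers, the metric count and metric name/stat pairs, the segment aggregated document count, max leaf docs, the skip-star-node-creation dimension count and field numbers, the build mode, and finally the data file pointer and data file length.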
+ * + * @opensearch.experimental + */ +public class StarTreeMetaSerializer { + + private static final Logger logger = LogManager.getLogger(StarTreeMetaSerializer.class); + + /** + * Serializes the star-tree metadata. + * + * @param metaOut the IndexOutput to write the metadata + * @param compositeFieldType the composite field type of the star-tree field + * @param starTreeField the star-tree field + * @param writeState the segment write state + * @param metricAggregatorInfos the list of metric aggregator information + * @param segmentAggregatedCount the aggregated document count for the segment + * @param dataFilePointer the file pointer to the start of the star tree data + * @param dataFileLength the length of the star tree data file + * @throws IOException if an I/O error occurs while serializing the metadata + */ + public static void serializeStarTreeMetadata( + IndexOutput metaOut, + CompositeMappedFieldType.CompositeFieldType compositeFieldType, + StarTreeField starTreeField, + SegmentWriteState writeState, + List metricAggregatorInfos, + Integer segmentAggregatedCount, + long dataFilePointer, + long dataFileLength + ) throws IOException { + long totalSizeInBytes = 0; + + // header size + totalSizeInBytes += computeHeaderByteSize(compositeFieldType, starTreeField.getName()); + // number of dimensions + totalSizeInBytes += Integer.BYTES; + // dimension field numbers + totalSizeInBytes += (long) starTreeField.getDimensionsOrder().size() * Integer.BYTES; + // metric count + totalSizeInBytes += Integer.BYTES; + // metric - metric stat pair + totalSizeInBytes += computeMetricEntriesSizeInBytes(metricAggregatorInfos); + // segment aggregated document count + totalSizeInBytes += Integer.BYTES; + // max leaf docs + totalSizeInBytes += Integer.BYTES; + // skip star node creation dimensions count + totalSizeInBytes += Integer.BYTES; + // skip star node creation dimensions field numbers + totalSizeInBytes += (long) starTreeField.getStarTreeConfig().getSkipStarNodeCreationInDims().size() * Integer.BYTES; + // data start file pointer + totalSizeInBytes += Long.BYTES; + // data length + totalSizeInBytes += Long.BYTES; + + logger.info("Star tree size in bytes : {}", totalSizeInBytes); + + writeMetaHeader(metaOut, compositeFieldType, starTreeField.getName()); + writeMeta(metaOut, writeState, metricAggregatorInfos, starTreeField, segmentAggregatedCount, dataFilePointer, dataFileLength); + } + + /** + * Computes the byte size required to store the star-tree metric entry. + * + * @param metricAggregatorInfos the list of metric aggregator information + * @return the byte size required to store the metric-metric stat pairs + */ + private static long computeMetricEntriesSizeInBytes(List metricAggregatorInfos) { + + long totalMetricEntriesSize = 0; + + for (MetricAggregatorInfo metricAggregatorInfo : metricAggregatorInfos) { + totalMetricEntriesSize += metricAggregatorInfo.getField().getBytes(UTF_8).length; + totalMetricEntriesSize += metricAggregatorInfo.getMetricStat().getTypeName().getBytes(UTF_8).length; + } + + return totalMetricEntriesSize; + } + + /** + * Computes the byte size of the star-tree metadata header. 
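+ * The fixed portion is 16 bytes (magic marker, version, header size), plus the UTF-8 byte lengths of the star-tree field name and field type.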
+ * + * @param compositeFieldType the composite field type of the star-tree field + * @param starTreeFieldName the name of the star-tree field + * @return the byte size of the star-tree metadata header + */ + private static int computeHeaderByteSize(CompositeMappedFieldType.CompositeFieldType compositeFieldType, String starTreeFieldName) { + // Magic marker (8), version (4), size of header (4) + int headerSizeInBytes = 16; + + // For star-tree field name + headerSizeInBytes += starTreeFieldName.getBytes(UTF_8).length; + + // For star tree field type + headerSizeInBytes += compositeFieldType.getName().getBytes(UTF_8).length; + + return headerSizeInBytes; + } + + /** + * Writes the star-tree metadata header. + * + * @param metaOut the IndexOutput to write the header + * @param compositeFieldType the composite field type of the star-tree field + * @param starTreeFieldName the name of the star-tree field + * @throws IOException if an I/O error occurs while writing the header + */ + private static void writeMetaHeader( + IndexOutput metaOut, + CompositeMappedFieldType.CompositeFieldType compositeFieldType, + String starTreeFieldName + ) throws IOException { + // magic marker for sanity + metaOut.writeLong(MAGIC_MARKER); + + // version + metaOut.writeInt(VERSION); + + // star tree field name + metaOut.writeString(starTreeFieldName); + + // star tree field type + metaOut.writeString(compositeFieldType.getName()); + } + + /** + * Writes the star-tree metadata. + * + * @param metaOut the IndexOutput to write the metadata + * @param writeState the segment write state + * @param metricAggregatorInfos the list of metric aggregator information + * @param starTreeField the star tree field + * @param segmentAggregatedDocCount the aggregated document count for the segment + * @param dataFilePointer the file pointer to the start of the star-tree data + * @param dataFileLength the length of the star-tree data file + * @throws IOException if an I/O error occurs while writing the metadata + */ + private static void writeMeta( + IndexOutput metaOut, + SegmentWriteState writeState, + List metricAggregatorInfos, + StarTreeField starTreeField, + Integer segmentAggregatedDocCount, + long dataFilePointer, + long dataFileLength + ) throws IOException { + + // number of dimensions + metaOut.writeInt(starTreeField.getDimensionsOrder().size()); + + // dimensions + for (Dimension dimension : starTreeField.getDimensionsOrder()) { + int dimensionFieldNumber = writeState.fieldInfos.fieldInfo(dimension.getField()).getFieldNumber(); + metaOut.writeInt(dimensionFieldNumber); + } + + // number of metrics + metaOut.writeInt(metricAggregatorInfos.size()); + + // metric - metric stat pair + for (MetricAggregatorInfo metricAggregatorInfo : metricAggregatorInfos) { + String metricName = metricAggregatorInfo.getField(); + String metricStatName = metricAggregatorInfo.getMetricStat().getTypeName(); + metaOut.writeString(metricName); + metaOut.writeString(metricStatName); + } + + // segment aggregated document count + metaOut.writeInt(segmentAggregatedDocCount); + + // max leaf docs + metaOut.writeInt(starTreeField.getStarTreeConfig().maxLeafDocs()); + + // number of skip star node creation dimensions + metaOut.writeInt(starTreeField.getStarTreeConfig().getSkipStarNodeCreationInDims().size()); + + // skip star node creation dimensions + for (String dimension : starTreeField.getStarTreeConfig().getSkipStarNodeCreationInDims()) { + int dimensionFieldNumber = writeState.fieldInfos.fieldInfo(dimension).getFieldNumber(); + metaOut.writeInt(dimensionFieldNumber); + } + + //
star tree build-mode + metaOut.writeString(starTreeField.getStarTreeConfig().getBuildMode().getTypeName()); + + // star-tree data file pointer + metaOut.writeLong(dataFilePointer); + + // star-tree data file length + metaOut.writeLong(dataFileLength); + + } +} diff --git a/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java new file mode 100644 index 0000000000000..c7e8b04d42178 --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/datacube/startree/utils/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Utility to support Composite Index Star Tree + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex.datacube.startree.utils; diff --git a/server/src/main/java/org/opensearch/index/compositeindex/package-info.java b/server/src/main/java/org/opensearch/index/compositeindex/package-info.java new file mode 100644 index 0000000000000..9a88f88d9850a --- /dev/null +++ b/server/src/main/java/org/opensearch/index/compositeindex/package-info.java @@ -0,0 +1,14 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +/** + * Core classes for handling composite indices. + * + * @opensearch.experimental + */ +package org.opensearch.index.compositeindex; diff --git a/server/src/main/java/org/opensearch/index/get/GetResult.java b/server/src/main/java/org/opensearch/index/get/GetResult.java index c0dd1cd2ecb30..27a2826f71e19 100644 --- a/server/src/main/java/org/opensearch/index/get/GetResult.java +++ b/server/src/main/java/org/opensearch/index/get/GetResult.java @@ -37,6 +37,7 @@ import org.opensearch.common.annotation.PublicApi; import org.opensearch.common.document.DocumentField; import org.opensearch.common.xcontent.XContentHelper; +import org.opensearch.core.common.ParsingException; import org.opensearch.core.common.Strings; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.common.io.stream.StreamInput; @@ -56,6 +57,7 @@ import java.util.Collections; import java.util.HashMap; import java.util.Iterator; +import java.util.Locale; import java.util.Map; import java.util.Objects; @@ -398,6 +400,14 @@ public static GetResult fromXContentEmbedded(XContentParser parser, String index } } } + + if (found == null) { + throw new ParsingException( + parser.getTokenLocation(), + String.format(Locale.ROOT, "Missing required field [%s]", GetResult.FOUND) + ); + } + return new GetResult(index, id, seqNo, primaryTerm, version, found, source, documentFields, metaFields); } diff --git a/server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java b/server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java new file mode 100644 index 0000000000000..baf6442f0c08c --- /dev/null +++ b/server/src/main/java/org/opensearch/index/mapper/CompositeDataCubeFieldType.java @@ -0,0 +1,56 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.mapper; + +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; + +import java.util.ArrayList; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +/** + * Base class for multi field data cube fields + * + * @opensearch.experimental + */ +@ExperimentalApi +public abstract class CompositeDataCubeFieldType extends CompositeMappedFieldType { + public static final String NAME = "name"; + public static final String TYPE = "type"; + private final List dimensions; + private final List metrics; + + public CompositeDataCubeFieldType(String name, List dims, List metrics, CompositeFieldType type) { + super(name, getFields(dims, metrics), type); + this.dimensions = dims; + this.metrics = metrics; + } + + private static List getFields(List dims, List metrics) { + Set fields = new HashSet<>(); + for (Dimension dim : dims) { + fields.add(dim.getField()); + } + for (Metric metric : metrics) { + fields.add(metric.getField()); + } + return new ArrayList<>(fields); + } + + public List getDimensions() { + return dimensions; + } + + public List getMetrics() { + return metrics; + } +} diff --git a/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java new file mode 100644 index 0000000000000..f4bdb19abd8cc --- /dev/null +++ b/server/src/main/java/org/opensearch/index/mapper/CompositeMappedFieldType.java @@ -0,0 +1,82 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.mapper; + +import org.opensearch.common.annotation.ExperimentalApi; + +import java.util.Collections; +import java.util.List; +import java.util.Map; + +/** + * Base class for composite field types + * + * @opensearch.experimental + */ +@ExperimentalApi +public abstract class CompositeMappedFieldType extends MappedFieldType { + private final List fields; + private final CompositeFieldType type; + + public CompositeMappedFieldType( + String name, + boolean isIndexed, + boolean isStored, + boolean hasDocValues, + TextSearchInfo textSearchInfo, + Map meta, + List fields, + CompositeFieldType type + ) { + super(name, isIndexed, isStored, hasDocValues, textSearchInfo, meta); + this.fields = fields; + this.type = type; + } + + public CompositeMappedFieldType(String name, List fields, CompositeFieldType type) { + this(name, false, false, false, TextSearchInfo.NONE, Collections.emptyMap(), fields, type); + } + + /** + * Supported composite field types + * + * @opensearch.experimental + */ + @ExperimentalApi + public enum CompositeFieldType { + STAR_TREE("star_tree"); + + private final String name; + + CompositeFieldType(String name) { + this.name = name; + } + + public String getName() { + return name; + } + + public static CompositeFieldType fromName(String name) { + for (CompositeFieldType metric : CompositeFieldType.values()) { + if (metric.getName().equalsIgnoreCase(name)) { + return metric; + } + } + throw new IllegalArgumentException("Invalid composite field type: " + name); + } + } + + public CompositeFieldType getCompositeIndexType() { + return type; + } + + public List fields() { + return fields; + } +} diff --git a/server/src/main/java/org/opensearch/index/mapper/Mapper.java b/server/src/main/java/org/opensearch/index/mapper/Mapper.java index bd5d3f15c0706..46a5050d4fc18 100644 --- a/server/src/main/java/org/opensearch/index/mapper/Mapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/Mapper.java @@ -253,6 +253,11 @@ public boolean isWithinMultiField() { } Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException; + + default Mapper.Builder parse(String name, Map node, ParserContext parserContext, ObjectMapper.Builder objBuilder) + throws MapperParsingException { + throw new UnsupportedOperationException("should not be invoked"); + } } private final String simpleName; diff --git a/server/src/main/java/org/opensearch/index/mapper/MapperService.java b/server/src/main/java/org/opensearch/index/mapper/MapperService.java index a1f3894c9f14c..530a3092a5aa7 100644 --- a/server/src/main/java/org/opensearch/index/mapper/MapperService.java +++ b/server/src/main/java/org/opensearch/index/mapper/MapperService.java @@ -226,6 +226,8 @@ public enum MergeReason { private final BooleanSupplier idFieldDataEnabled; + private volatile Set compositeMappedFieldTypes; + public MapperService( IndexSettings indexSettings, IndexAnalyzers indexAnalyzers, @@ -542,6 +544,9 @@ private synchronized Map internalMerge(DocumentMapper ma } assert results.values().stream().allMatch(this::assertSerialization); + + // initialize composite fields post merge + this.compositeMappedFieldTypes = getCompositeFieldTypesFromMapper(); return results; } @@ -650,6 +655,27 @@ public Iterable fieldTypes() { return this.mapper == null ? 
Collections.emptySet() : this.mapper.fieldTypes(); } + public boolean isCompositeIndexPresent() { + return this.mapper != null && !getCompositeFieldTypes().isEmpty(); + } + + public Set getCompositeFieldTypes() { + return compositeMappedFieldTypes; + } + + private Set getCompositeFieldTypesFromMapper() { + Set compositeMappedFieldTypes = new HashSet<>(); + if (this.mapper == null) { + return Collections.emptySet(); + } + for (MappedFieldType type : this.mapper.fieldTypes()) { + if (type instanceof CompositeMappedFieldType) { + compositeMappedFieldTypes.add((CompositeMappedFieldType) type); + } + } + return compositeMappedFieldTypes; + } + public ObjectMapper getObjectMapper(String name) { return this.mapper == null ? null : this.mapper.objectMappers().get(name); } diff --git a/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java b/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java index 92ffdb60e6cde..be3adfe8b2c4e 100644 --- a/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/ObjectMapper.java @@ -42,9 +42,11 @@ import org.opensearch.common.collect.CopyOnWriteHashMap; import org.opensearch.common.logging.DeprecationLogger; import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; import org.opensearch.common.xcontent.support.XContentMapValues; import org.opensearch.core.xcontent.ToXContent; import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; import org.opensearch.index.mapper.MapperService.MergeReason; import java.io.IOException; @@ -176,6 +178,7 @@ public void setIncludeInRoot(boolean value) { * @opensearch.internal */ @SuppressWarnings("rawtypes") + @PublicApi(since = "1.0.0") public static class Builder extends Mapper.Builder { protected Explicit enabled = new Explicit<>(true, false); @@ -262,14 +265,25 @@ public static class TypeParser implements Mapper.TypeParser { public Mapper.Builder parse(String name, Map node, ParserContext parserContext) throws MapperParsingException { ObjectMapper.Builder builder = new Builder(name); parseNested(name, node, builder, parserContext); + Object compositeField = null; for (Iterator> iterator = node.entrySet().iterator(); iterator.hasNext();) { Map.Entry entry = iterator.next(); String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); - if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder)) { + if (fieldName.equals("composite")) { + compositeField = fieldNode; iterator.remove(); + } else { + if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder)) { + iterator.remove(); + } } } + // Important : Composite field is made up of 2 or more source fields of the index, so this must be called + // after parsing all other properties + if (compositeField != null) { + parseCompositeField(builder, (Map) compositeField, parserContext); + } return builder; } @@ -407,6 +421,96 @@ protected static void parseDerived(ObjectMapper.Builder objBuilder, Map compositeNode, + ParserContext parserContext + ) { + if (!FeatureFlags.isEnabled(FeatureFlags.STAR_TREE_INDEX_SETTING)) { + throw new IllegalArgumentException( + "star tree index is under an experimental feature and can be activated only by enabling " + + FeatureFlags.STAR_TREE_INDEX_SETTING.getKey() + + " feature flag in the JVM options" + ); + } + Iterator> iterator = compositeNode.entrySet().iterator(); + if 
(compositeNode.size() > StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(parserContext.getSettings())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "Composite fields cannot have more than [%s] fields", + StarTreeIndexSettings.STAR_TREE_MAX_FIELDS_SETTING.get(parserContext.getSettings()) + ) + ); + } + while (iterator.hasNext()) { + Map.Entry entry = iterator.next(); + String fieldName = entry.getKey(); + // Should accept empty arrays, as a work around for when the + // user can't provide an empty Map. (PHP for example) + boolean isEmptyList = entry.getValue() instanceof List && ((List) entry.getValue()).isEmpty(); + if (entry.getValue() instanceof Map) { + @SuppressWarnings("unchecked") + Map propNode = (Map) entry.getValue(); + String type; + Object typeNode = propNode.get("type"); + if (typeNode != null) { + type = typeNode.toString(); + } else { + // let's see if we can derive this... + throw new MapperParsingException("No type specified for field [" + fieldName + "]"); + } + Mapper.TypeParser typeParser = getSupportedCompositeTypeParser(type, parserContext); + if (typeParser == null) { + throw new MapperParsingException("No handler for type [" + type + "] declared on field [" + fieldName + "]"); + } + String[] fieldNameParts = fieldName.split("\\."); + // field name is just ".", which is invalid + if (fieldNameParts.length < 1) { + throw new MapperParsingException("Invalid field name " + fieldName); + } + String realFieldName = fieldNameParts[fieldNameParts.length - 1]; + Mapper.Builder fieldBuilder = typeParser.parse(realFieldName, propNode, parserContext, objBuilder); + for (int i = fieldNameParts.length - 2; i >= 0; --i) { + ObjectMapper.Builder intermediate = new ObjectMapper.Builder<>(fieldNameParts[i]); + intermediate.add(fieldBuilder); + fieldBuilder = intermediate; + } + objBuilder.add(fieldBuilder); + propNode.remove("type"); + DocumentMapperParser.checkNoRemainingFields(fieldName, propNode, parserContext.indexVersionCreated()); + iterator.remove(); + } else if (isEmptyList) { + iterator.remove(); + } else { + throw new MapperParsingException( + "Expected map for composite field [" + fieldName + "] but got a " + entry.getValue().getClass() + ); + } + } + + DocumentMapperParser.checkNoRemainingFields( + compositeNode, + parserContext.indexVersionCreated(), + "DocType mapping definition has unsupported parameters: " + ); + } + + private static Mapper.TypeParser getSupportedCompositeTypeParser(String type, ParserContext parserContext) { + switch (type) { + case StarTreeMapper.CONTENT_TYPE: + return parserContext.typeParser(type); + default: + throw new IllegalArgumentException( + String.format(Locale.ROOT, "Type [%s] isn't supported in composite field context.", type) + ); + } + } + protected static void parseProperties(ObjectMapper.Builder objBuilder, Map propsNode, ParserContext parserContext) { Iterator> iterator = propsNode.entrySet().iterator(); while (iterator.hasNext()) { diff --git a/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java b/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java index 9504e6eafc046..e06e5be4633f9 100644 --- a/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java +++ b/server/src/main/java/org/opensearch/index/mapper/RootObjectMapper.java @@ -177,15 +177,26 @@ public Mapper.Builder parse(String name, Map node, ParserContext RootObjectMapper.Builder builder = new Builder(name); Iterator> iterator = node.entrySet().iterator(); + Object compositeField = null;
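+ // stash the composite section here; it is parsed only after all other properties have been handled (see below)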
while (iterator.hasNext()) { Map.Entry entry = iterator.next(); String fieldName = entry.getKey(); Object fieldNode = entry.getValue(); - if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder) - || processField(builder, fieldName, fieldNode, parserContext)) { + if (fieldName.equals("composite")) { + compositeField = fieldNode; iterator.remove(); + } else { + if (parseObjectOrDocumentTypeProperties(fieldName, fieldNode, parserContext, builder) + || processField(builder, fieldName, fieldNode, parserContext)) { + iterator.remove(); + } } } + // Important : Composite field is made up of 2 or more source properties of the index, so this must be called + // after parsing all other properties + if (compositeField != null) { + parseCompositeField(builder, (Map) compositeField, parserContext); + } return builder; } diff --git a/server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java b/server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java new file mode 100644 index 0000000000000..d2debe762e9be --- /dev/null +++ b/server/src/main/java/org/opensearch/index/mapper/StarTreeMapper.java @@ -0,0 +1,406 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.mapper; + +import org.apache.lucene.search.Query; +import org.opensearch.common.annotation.ExperimentalApi; +import org.opensearch.common.xcontent.support.XContentMapValues; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.DimensionFactory; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeIndexSettings; +import org.opensearch.index.query.QueryShardContext; +import org.opensearch.search.lookup.SearchLookup; + +import java.util.ArrayList; +import java.util.LinkedHashSet; +import java.util.LinkedList; +import java.util.List; +import java.util.Locale; +import java.util.Map; +import java.util.Optional; +import java.util.Set; +import java.util.stream.Collectors; + +/** + * A field mapper for star tree fields + * + * @opensearch.experimental + */ +@ExperimentalApi +public class StarTreeMapper extends ParametrizedFieldMapper { + public static final String CONTENT_TYPE = "star_tree"; + public static final String CONFIG = "config"; + public static final String MAX_LEAF_DOCS = "max_leaf_docs"; + public static final String SKIP_STAR_NODE_IN_DIMS = "skip_star_node_creation_for_dimensions"; + public static final String BUILD_MODE = "build_mode"; + public static final String ORDERED_DIMENSIONS = "ordered_dimensions"; + public static final String METRICS = "metrics"; + public static final String STATS = "stats"; + + @Override + public ParametrizedFieldMapper.Builder getMergeBuilder() { + return new Builder(simpleName(), objBuilder).init(this); + + } + + /** + * Builder for the star tree field mapper + * + * @opensearch.internal + */ + public static class Builder extends ParametrizedFieldMapper.Builder { + private ObjectMapper.Builder objbuilder; + private static final Set> ALLOWED_DIMENSION_MAPPER_BUILDERS = Set.of( + NumberFieldMapper.Builder.class, 
+ DateFieldMapper.Builder.class + ); + private static final Set> ALLOWED_METRIC_MAPPER_BUILDERS = Set.of(NumberFieldMapper.Builder.class); + + @SuppressWarnings("unchecked") + private final Parameter config = new Parameter<>(CONFIG, false, () -> null, (name, context, nodeObj) -> { + if (nodeObj instanceof Map) { + Map paramMap = (Map) nodeObj; + int maxLeafDocs = XContentMapValues.nodeIntegerValue( + paramMap.get(MAX_LEAF_DOCS), + StarTreeIndexSettings.STAR_TREE_DEFAULT_MAX_LEAF_DOCS.get(context.getSettings()) + ); + if (maxLeafDocs < 1) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "%s [%s] must be greater than 0", MAX_LEAF_DOCS, maxLeafDocs) + ); + } + paramMap.remove(MAX_LEAF_DOCS); + Set skipStarInDims = new LinkedHashSet<>( + List.of(XContentMapValues.nodeStringArrayValue(paramMap.getOrDefault(SKIP_STAR_NODE_IN_DIMS, new ArrayList()))) + ); + paramMap.remove(SKIP_STAR_NODE_IN_DIMS); + // TODO : change this to off heap once off heap gets implemented + StarTreeFieldConfiguration.StarTreeBuildMode buildMode = StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP; + + List dimensions = buildDimensions(name, paramMap, context); + paramMap.remove(ORDERED_DIMENSIONS); + List metrics = buildMetrics(name, paramMap, context); + paramMap.remove(METRICS); + paramMap.remove(CompositeDataCubeFieldType.NAME); + for (String dim : skipStarInDims) { + if (dimensions.stream().filter(d -> d.getField().equals(dim)).findAny().isEmpty()) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "[%s] in skip_star_node_creation_for_dimensions should be part of ordered_dimensions", + dim + ) + ); + } + } + StarTreeFieldConfiguration spec = new StarTreeFieldConfiguration(maxLeafDocs, skipStarInDims, buildMode); + DocumentMapperParser.checkNoRemainingFields( + paramMap, + context.indexVersionCreated(), + "Star tree mapping definition has unsupported parameters: " + ); + return new StarTreeField(this.name, dimensions, metrics, spec); + + } else { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "unable to parse config for star tree field [%s]", this.name) + ); + } + }, m -> toType(m).starTreeField); + + /** + * Build dimensions from mapping + */ + @SuppressWarnings("unchecked") + private List buildDimensions(String fieldName, Map map, Mapper.TypeParser.ParserContext context) { + Object dims = XContentMapValues.extractValue("ordered_dimensions", map); + if (dims == null) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "ordered_dimensions is required for star tree field [%s]", fieldName) + ); + } + List dimensions = new LinkedList<>(); + if (dims instanceof List) { + List dimList = (List) dims; + if (dimList.size() > context.getSettings() + .getAsInt( + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING.getKey(), + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_DEFAULT + )) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "ordered_dimensions cannot have more than %s dimensions for star tree field [%s]", + context.getSettings() + .getAsInt( + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_SETTING.getKey(), + StarTreeIndexSettings.STAR_TREE_MAX_DIMENSIONS_DEFAULT + ), + fieldName + ) + ); + } + if (dimList.size() < 2) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "At least two dimensions are required to build star tree index field [%s]", fieldName) + ); + } + for (Object dim : dimList) { + dimensions.add(getDimension(fieldName, dim, context)); + } + } else { + throw new
MapperParsingException( + String.format(Locale.ROOT, "unable to parse ordered_dimensions for star tree field [%s]", fieldName) + ); + } + return dimensions; + } + + /** + * Get dimension based on mapping + */ + @SuppressWarnings("unchecked") + private Dimension getDimension(String fieldName, Object dimensionMapping, Mapper.TypeParser.ParserContext context) { + Dimension dimension; + Map dimensionMap = (Map) dimensionMapping; + String name = (String) XContentMapValues.extractValue(CompositeDataCubeFieldType.NAME, dimensionMap); + dimensionMap.remove(CompositeDataCubeFieldType.NAME); + if (this.objbuilder == null || this.objbuilder.mappersBuilders == null) { + String type = (String) XContentMapValues.extractValue(CompositeDataCubeFieldType.TYPE, dimensionMap); + dimensionMap.remove(CompositeDataCubeFieldType.TYPE); + if (type == null) { + throw new MapperParsingException( + String.format(Locale.ROOT, "unable to parse ordered_dimensions for star tree field [%s]", fieldName) + ); + } + return DimensionFactory.parseAndCreateDimension(name, type, dimensionMap, context); + } else { + Optional dimBuilder = findMapperBuilderByName(name, this.objbuilder.mappersBuilders); + if (dimBuilder.isEmpty()) { + throw new IllegalArgumentException(String.format(Locale.ROOT, "unknown dimension field [%s]", name)); + } + if (!isBuilderAllowedForDimension(dimBuilder.get())) { + throw new IllegalArgumentException( + String.format( + Locale.ROOT, + "unsupported field type associated with dimension [%s] as part of star tree field [%s]", + name, + fieldName + ) + ); + } + dimension = DimensionFactory.parseAndCreateDimension(name, dimBuilder.get(), dimensionMap, context); + } + DocumentMapperParser.checkNoRemainingFields( + dimensionMap, + context.indexVersionCreated(), + "Star tree mapping definition has unsupported parameters: " + ); + return dimension; + } + + /** + * Build metrics from mapping + */ + @SuppressWarnings("unchecked") + private List buildMetrics(String fieldName, Map map, Mapper.TypeParser.ParserContext context) { + List metrics = new LinkedList<>(); + Object metricsFromInput = XContentMapValues.extractValue(METRICS, map); + if (metricsFromInput == null) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "metrics section is required for star tree field [%s]", fieldName) + ); + } + if (metricsFromInput instanceof List) { + List metricsList = (List) metricsFromInput; + for (Object metric : metricsList) { + Map metricMap = (Map) metric; + String name = (String) XContentMapValues.extractValue(CompositeDataCubeFieldType.NAME, metricMap); + metricMap.remove(CompositeDataCubeFieldType.NAME); + if (objbuilder == null || objbuilder.mappersBuilders == null) { + metrics.add(getMetric(name, metricMap, context)); + } else { + Optional metricBuilder = findMapperBuilderByName(name, this.objbuilder.mappersBuilders); + if (metricBuilder.isEmpty()) { + throw new IllegalArgumentException(String.format(Locale.ROOT, "unknown metric field [%s]", name)); + } + if (!isBuilderAllowedForMetric(metricBuilder.get())) { + throw new IllegalArgumentException( + String.format(Locale.ROOT, "non-numeric field type is associated with star tree metric [%s]", this.name) + ); + } + metrics.add(getMetric(name, metricMap, context)); + DocumentMapperParser.checkNoRemainingFields( + metricMap, + context.indexVersionCreated(), + "Star tree mapping definition has unsupported parameters: " + ); + } + } + } else { + throw new MapperParsingException(String.format(Locale.ROOT, "unable to parse metrics for star tree field [%s]",
this.name)); + } + + return metrics; + } + + @SuppressWarnings("unchecked") + private Metric getMetric(String name, Map metric, Mapper.TypeParser.ParserContext context) { + List metricTypes; + List metricStrings = XContentMapValues.extractRawValues(STATS, metric) + .stream() + .map(Object::toString) + .collect(Collectors.toList()); + metric.remove(STATS); + if (metricStrings.isEmpty()) { + metricTypes = new ArrayList<>(StarTreeIndexSettings.DEFAULT_METRICS_LIST.get(context.getSettings())); + } else { + Set metricSet = new LinkedHashSet<>(); + for (String metricString : metricStrings) { + metricSet.add(MetricStat.fromTypeName(metricString)); + } + metricTypes = new ArrayList<>(metricSet); + } + return new Metric(name, metricTypes); + } + + @Override + protected List> getParameters() { + return List.of(config); + } + + private static boolean isBuilderAllowedForDimension(Mapper.Builder builder) { + return ALLOWED_DIMENSION_MAPPER_BUILDERS.stream().anyMatch(allowedType -> allowedType.isInstance(builder)); + } + + private static boolean isBuilderAllowedForMetric(Mapper.Builder builder) { + return ALLOWED_METRIC_MAPPER_BUILDERS.stream().anyMatch(allowedType -> allowedType.isInstance(builder)); + } + + private Optional findMapperBuilderByName(String field, List mappersBuilders) { + return mappersBuilders.stream().filter(builder -> builder.name().equals(field)).findFirst(); + } + + public Builder(String name, ObjectMapper.Builder objBuilder) { + super(name); + this.objbuilder = objBuilder; + } + + @Override + public ParametrizedFieldMapper build(BuilderContext context) { + StarTreeFieldType type = new StarTreeFieldType(name, this.config.get()); + return new StarTreeMapper(name, type, this, objbuilder); + } + } + + private static StarTreeMapper toType(FieldMapper in) { + return (StarTreeMapper) in; + } + + /** + * Concrete parser for the star tree type + * + * @opensearch.internal + */ + public static class TypeParser implements Mapper.TypeParser { + + /** + * default constructor of StarTreeMapper.TypeParser + */ + public TypeParser() {} + + @Override + public Mapper.Builder parse(String name, Map node, ParserContext context) throws MapperParsingException { + Builder builder = new StarTreeMapper.Builder(name, null); + builder.parse(name, context, node); + return builder; + } + + @Override + public Mapper.Builder parse(String name, Map node, ParserContext context, ObjectMapper.Builder objBuilder) + throws MapperParsingException { + Builder builder = new StarTreeMapper.Builder(name, objBuilder); + builder.parse(name, context, node); + return builder; + } + } + + private final StarTreeField starTreeField; + + private final ObjectMapper.Builder objBuilder; + + protected StarTreeMapper(String simpleName, StarTreeFieldType type, Builder builder, ObjectMapper.Builder objbuilder) { + super(simpleName, type, MultiFields.empty(), CopyTo.empty()); + this.starTreeField = builder.config.get(); + this.objBuilder = objbuilder; + } + + @Override + public StarTreeFieldType fieldType() { + return (StarTreeFieldType) super.fieldType(); + } + + @Override + protected String contentType() { + return CONTENT_TYPE; + } + + @Override + protected void parseCreateField(ParseContext context) { + throw new MapperParsingException( + String.format( + Locale.ROOT, + "Field [%s] is a star tree field and cannot be added inside a document.
Use the index API request parameters.", + name() + ) + ); + } + + /** + * Star tree mapped field type containing dimensions, metrics, star tree specs + * + * @opensearch.experimental + */ + @ExperimentalApi + public static final class StarTreeFieldType extends CompositeDataCubeFieldType { + + private final StarTreeFieldConfiguration starTreeConfig; + + public StarTreeFieldType(String name, StarTreeField starTreeField) { + super(name, starTreeField.getDimensionsOrder(), starTreeField.getMetrics(), CompositeFieldType.STAR_TREE); + this.starTreeConfig = starTreeField.getStarTreeConfig(); + } + + public StarTreeFieldConfiguration getStarTreeConfig() { + return starTreeConfig; + } + + @Override + public ValueFetcher valueFetcher(QueryShardContext context, SearchLookup searchLookup, String format) { + // TODO : evaluate later + throw new UnsupportedOperationException("Cannot fetch values for star tree field [" + name() + "]."); + } + + @Override + public String typeName() { + return CONTENT_TYPE; + } + + @Override + public Query termQuery(Object value, QueryShardContext context) { + // TODO : evaluate later + throw new UnsupportedOperationException("Cannot perform terms query on star tree field [" + name() + "]."); + } + } + +} diff --git a/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java b/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java index 2ea7ea8dbee12..9ff6ddb1fb667 100644 --- a/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java +++ b/server/src/main/java/org/opensearch/index/store/remote/utils/cache/SegmentedCache.java @@ -52,15 +52,15 @@ private static final int ceilingNextPowerOfTwo(int x) { private final Weigher weigher; public SegmentedCache(Builder builder) { - this.capacity = builder.capacity; final int segments = ceilingNextPowerOfTwo(builder.concurrencyLevel); this.segmentMask = segments - 1; this.table = newSegmentArray(segments); - this.perSegmentCapacity = (capacity + (segments - 1)) / segments; + this.perSegmentCapacity = (builder.capacity + (segments - 1)) / segments; this.weigher = builder.weigher; for (int i = 0; i < table.length; i++) { table[i] = new LRUCache<>(perSegmentCapacity, builder.listener, builder.weigher); } + this.capacity = perSegmentCapacity * segments; } @SuppressWarnings("unchecked") diff --git a/server/src/main/java/org/opensearch/indices/IndicesModule.java b/server/src/main/java/org/opensearch/indices/IndicesModule.java index 033b163bb0d67..f7e52ce9fc1ae 100644 --- a/server/src/main/java/org/opensearch/indices/IndicesModule.java +++ b/server/src/main/java/org/opensearch/indices/IndicesModule.java @@ -70,6 +70,7 @@ import org.opensearch.index.mapper.RoutingFieldMapper; import org.opensearch.index.mapper.SeqNoFieldMapper; import org.opensearch.index.mapper.SourceFieldMapper; +import org.opensearch.index.mapper.StarTreeMapper; import org.opensearch.index.mapper.TextFieldMapper; import org.opensearch.index.mapper.VersionFieldMapper; import org.opensearch.index.mapper.WildcardFieldMapper; @@ -174,6 +175,7 @@ public static Map getMappers(List mappe mappers.put(ConstantKeywordFieldMapper.CONTENT_TYPE, new ConstantKeywordFieldMapper.TypeParser()); mappers.put(DerivedFieldMapper.CONTENT_TYPE, DerivedFieldMapper.PARSER); mappers.put(WildcardFieldMapper.CONTENT_TYPE, WildcardFieldMapper.PARSER); + mappers.put(StarTreeMapper.CONTENT_TYPE, new StarTreeMapper.TypeParser()); for (MapperPlugin mapperPlugin : mapperPlugins) { for (Map.Entry entry : 
mapperPlugin.getMappers().entrySet()) { diff --git a/server/src/main/java/org/opensearch/indices/IndicesService.java b/server/src/main/java/org/opensearch/indices/IndicesService.java index a7d879fc06981..902ca95643625 100644 --- a/server/src/main/java/org/opensearch/indices/IndicesService.java +++ b/server/src/main/java/org/opensearch/indices/IndicesService.java @@ -106,6 +106,7 @@ import org.opensearch.index.IndexSettings; import org.opensearch.index.analysis.AnalysisRegistry; import org.opensearch.index.cache.request.ShardRequestCache; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.CommitStats; import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.engine.EngineConfigFactory; @@ -356,6 +357,7 @@ public class IndicesService extends AbstractLifecycleComponent private volatile TimeValue clusterDefaultRefreshInterval; private final SearchRequestStats searchRequestStats; private final FileCache fileCache; + private final CompositeIndexSettings compositeIndexSettings; @Override protected void doStart() { @@ -391,7 +393,8 @@ public IndicesService( RecoverySettings recoverySettings, CacheService cacheService, RemoteStoreSettings remoteStoreSettings, - FileCache fileCache + FileCache fileCache, + CompositeIndexSettings compositeIndexSettings ) { this.settings = settings; this.threadPool = threadPool; @@ -498,6 +501,7 @@ protected void closeInternal() { .addSettingsUpdateConsumer(CLUSTER_DEFAULT_INDEX_REFRESH_INTERVAL_SETTING, this::onRefreshIntervalUpdate); this.recoverySettings = recoverySettings; this.remoteStoreSettings = remoteStoreSettings; + this.compositeIndexSettings = compositeIndexSettings; this.fileCache = fileCache; } @@ -558,6 +562,7 @@ public IndicesService( recoverySettings, cacheService, remoteStoreSettings, + null, null ); } @@ -939,7 +944,8 @@ private synchronized IndexService createIndexService( () -> allowExpensiveQueries, indexNameExpressionResolver, recoveryStateFactories, - fileCache + fileCache, + compositeIndexSettings ); for (IndexingOperationListener operationListener : indexingOperationListeners) { indexModule.addIndexOperationListener(operationListener); @@ -1030,7 +1036,8 @@ public synchronized MapperService createIndexMapperService(IndexMetadata indexMe () -> allowExpensiveQueries, indexNameExpressionResolver, recoveryStateFactories, - fileCache + fileCache, + compositeIndexSettings ); pluginsService.onIndexModule(indexModule); return indexModule.newIndexMapperService(xContentRegistry, mapperRegistry, scriptService); @@ -2098,4 +2105,8 @@ private TimeValue getClusterDefaultRefreshInterval() { public RemoteStoreSettings getRemoteStoreSettings() { return this.remoteStoreSettings; } + + public CompositeIndexSettings getCompositeIndexSettings() { + return this.compositeIndexSettings; + } } diff --git a/server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java b/server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java new file mode 100644 index 0000000000000..55413b9bbdad1 --- /dev/null +++ b/server/src/main/java/org/opensearch/ingest/AbstractBatchingProcessor.java @@ -0,0 +1,136 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.ingest; + +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; +import java.util.Map; +import java.util.concurrent.atomic.AtomicInteger; +import java.util.function.Consumer; + +import static org.opensearch.ingest.ConfigurationUtils.newConfigurationException; + +/** + * Abstract base class for batch processors. + * + * @opensearch.internal + */ +public abstract class AbstractBatchingProcessor extends AbstractProcessor { + + public static final String BATCH_SIZE_FIELD = "batch_size"; + private static final int DEFAULT_BATCH_SIZE = 1; + protected final int batchSize; + + protected AbstractBatchingProcessor(String tag, String description, int batchSize) { + super(tag, description); + this.batchSize = batchSize; + } + + /** + * Internal logic to process batched documents, must be implemented by concrete batch processors. + * + * @param ingestDocumentWrappers {@link List} of {@link IngestDocumentWrapper} to be processed. + * @param handler {@link Consumer} to be called with the results of the processing. + */ + protected abstract void subBatchExecute( + List<IngestDocumentWrapper> ingestDocumentWrappers, + Consumer<List<IngestDocumentWrapper>> handler + ); + + @Override + public void batchExecute(List<IngestDocumentWrapper> ingestDocumentWrappers, Consumer<List<IngestDocumentWrapper>> handler) { + if (ingestDocumentWrappers.isEmpty()) { + handler.accept(Collections.emptyList()); + return; + } + + // if batch size is larger than document size, send one batch + if (this.batchSize >= ingestDocumentWrappers.size()) { + subBatchExecute(ingestDocumentWrappers, handler); + return; + } + + // split documents into multiple batches and send each batch to batch processors + List<List<IngestDocumentWrapper>> batches = cutBatches(ingestDocumentWrappers); + int size = ingestDocumentWrappers.size(); + AtomicInteger counter = new AtomicInteger(size); + List<IngestDocumentWrapper> allResults = Collections.synchronizedList(new ArrayList<>()); + for (List<IngestDocumentWrapper> batch : batches) { + this.subBatchExecute(batch, batchResults -> { + allResults.addAll(batchResults); + if (counter.addAndGet(-batchResults.size()) == 0) { + handler.accept(allResults); + } + assert counter.get() >= 0 : "counter is negative"; + }); + } + } + + private List<List<IngestDocumentWrapper>> cutBatches(List<IngestDocumentWrapper> ingestDocumentWrappers) { + List<List<IngestDocumentWrapper>> batches = new ArrayList<>(); + for (int i = 0; i < ingestDocumentWrappers.size(); i += this.batchSize) { + batches.add(ingestDocumentWrappers.subList(i, Math.min(i + this.batchSize, ingestDocumentWrappers.size()))); + } + return batches; + } + + /** + * Factory class for creating {@link AbstractBatchingProcessor} instances. + * + * @opensearch.internal + */ + public abstract static class Factory implements Processor.Factory { + final String processorType; + + protected Factory(String processorType) { + this.processorType = processorType; + } + + /** + * Creates a new processor instance. + * + * @param processorFactories The processor factories. + * @param tag The processor tag. + * @param description The processor description. + * @param config The processor configuration. + * @return The new AbstractBatchingProcessor instance. + * @throws Exception If the processor could not be created. + */ + @Override + public AbstractBatchingProcessor create( + Map<String, Processor.Factory> processorFactories, + String tag, + String description, + Map<String, Object> config + ) throws Exception { + int batchSize = ConfigurationUtils.readIntProperty(this.processorType, tag, config, BATCH_SIZE_FIELD, DEFAULT_BATCH_SIZE); + if (batchSize < 1) { + throw newConfigurationException(this.processorType, tag, BATCH_SIZE_FIELD, "batch size must be a positive integer"); + } + return newProcessor(tag, description, batchSize, config); + } + + /** + * Returns a new processor instance. + * + * @param tag tag of the processor + * @param description description of the processor + * @param batchSize batch size of the processor + * @param config configuration of the processor + * @return a new batch processor instance + */ + protected abstract AbstractBatchingProcessor newProcessor( + String tag, + String description, + int batchSize, + Map<String, Object> config + ); + } +}
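A minimal sketch of a concrete subclass, for orientation (illustrative only: NoopBatchingProcessor and its "noop_batch" type are hypothetical names, not part of this patch). The base class reads and validates batch_size and handles the splitting and recombination shown above, so a subclass only supplies the per-batch and per-document logic:

    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Hypothetical subclass that completes every sub-batch unchanged.
    public class NoopBatchingProcessor extends AbstractBatchingProcessor {
        public static final String TYPE = "noop_batch"; // assumed type name

        protected NoopBatchingProcessor(String tag, String description, int batchSize) {
            super(tag, description, batchSize);
        }

        @Override
        protected void subBatchExecute(List<IngestDocumentWrapper> wrappers, Consumer<List<IngestDocumentWrapper>> handler) {
            handler.accept(wrappers); // hand each sub-batch back without modification
        }

        @Override
        public IngestDocument execute(IngestDocument ingestDocument) {
            return ingestDocument; // single-document path required by AbstractProcessor
        }

        @Override
        public String getType() {
            return TYPE;
        }

        public static class Factory extends AbstractBatchingProcessor.Factory {
            public Factory() {
                super(TYPE);
            }

            @Override
            protected AbstractBatchingProcessor newProcessor(String tag, String description, int batchSize, Map<String, Object> config) {
                return new NoopBatchingProcessor(tag, description, batchSize);
            }
        }
    }

With batch_size 5 and 12 documents, batchExecute above cuts sub-batches of 5, 5, and 2 and calls the caller's handler exactly once, after the atomic counter drains to zero.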
diff --git a/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java b/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java index f93cb63ff1f0a..db77ec7628e76 100644 --- a/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java +++ b/server/src/main/java/org/opensearch/monitor/fs/FsProbe.java @@ -81,7 +81,11 @@ public FsInfo stats(FsInfo previous) throws IOException { if (fileCache != null && dataLocations[i].fileCacheReservedSize != ByteSizeValue.ZERO) { paths[i].fileCacheReserved = adjustForHugeFilesystems(dataLocations[i].fileCacheReservedSize.getBytes()); paths[i].fileCacheUtilized = adjustForHugeFilesystems(fileCache.usage().usage()); - paths[i].available -= (paths[i].fileCacheReserved - paths[i].fileCacheUtilized); + // fileCacheFree will be less than zero if the cache is over-subscribed + long fileCacheFree = paths[i].fileCacheReserved - paths[i].fileCacheUtilized; + if (fileCacheFree > 0) { + paths[i].available -= fileCacheFree; + } // occurs if reserved file cache space is occupied by other files, like local indices if (paths[i].available < 0) { paths[i].available = 0; @@ -215,4 +219,11 @@ public static FsInfo.Path getFSInfo(NodePath nodePath) throws IOException { return fsPath; } + public static long getTotalSize(NodePath nodePath) throws IOException { + return adjustForHugeFilesystems(nodePath.fileStore.getTotalSpace()); + } + + public static long getAvailableSize(NodePath nodePath) throws IOException { + return adjustForHugeFilesystems(nodePath.fileStore.getUsableSpace()); + } } diff --git a/server/src/main/java/org/opensearch/node/Node.java b/server/src/main/java/org/opensearch/node/Node.java index a91dce4ece126..85ef547e27787 100644 --- a/server/src/main/java/org/opensearch/node/Node.java +++ b/server/src/main/java/org/opensearch/node/Node.java @@ -38,6 +38,7 @@ import org.opensearch.Build; import org.opensearch.ExceptionsHelper; import org.opensearch.OpenSearchException; +import org.opensearch.OpenSearchParseException; import org.opensearch.OpenSearchTimeoutException; import org.opensearch.Version; import org.opensearch.action.ActionModule; @@ -108,6 +109,7 @@ import org.opensearch.common.settings.Settings; import org.opensearch.common.settings.SettingsException; import org.opensearch.common.settings.SettingsModule; +import org.opensearch.common.unit.RatioValue; import org.opensearch.common.unit.TimeValue; import org.opensearch.common.util.BigArrays; import org.opensearch.common.util.FeatureFlags; @@ -147,6 +149,7 @@ import org.opensearch.index.IndexingPressureService; import org.opensearch.index.SegmentReplicationStatsTracker; import org.opensearch.index.analysis.AnalysisRegistry; +import org.opensearch.index.compositeindex.CompositeIndexSettings; import org.opensearch.index.engine.EngineFactory; import org.opensearch.index.recovery.RemoteStoreRestoreService; import org.opensearch.index.remote.RemoteIndexPathUploader; @@ -176,7 +179,6 @@ import org.opensearch.ingest.IngestService; import org.opensearch.monitor.MonitorService; import org.opensearch.monitor.fs.FsHealthService; -import org.opensearch.monitor.fs.FsInfo; import org.opensearch.monitor.fs.FsProbe; import org.opensearch.monitor.jvm.JvmInfo; import org.opensearch.node.remotestore.RemoteStoreNodeService; @@ -372,9 +374,12 @@ public class Node implements Closeable { } }, Setting.Property.NodeScope); - public static final Setting<ByteSizeValue> NODE_SEARCH_CACHE_SIZE_SETTING = Setting.byteSizeSetting( + private static final String ZERO = "0"; + + public static final Setting<String> NODE_SEARCH_CACHE_SIZE_SETTING = new Setting<>( "node.search.cache.size", - ByteSizeValue.ZERO, + s -> (FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX_SETTING) || DiscoveryNode.isDedicatedSearchNode(s)) ? "80%" : ZERO, + Node::validateFileCacheSize, Property.NodeScope ); @@ -830,6 +835,7 @@ protected Node( final RecoverySettings recoverySettings = new RecoverySettings(settings, settingsModule.getClusterSettings()); final RemoteStoreSettings remoteStoreSettings = new RemoteStoreSettings(settings, settingsModule.getClusterSettings()); + final CompositeIndexSettings compositeIndexSettings = new CompositeIndexSettings(settings, settingsModule.getClusterSettings()); + final IndexStorePlugin.DirectoryFactory remoteDirectoryFactory = new RemoteSegmentStoreDirectoryFactory( repositoriesServiceReference::get, @@ -870,7 +876,8 @@ protected Node( recoverySettings, cacheService, remoteStoreSettings, - fileCache + fileCache, + compositeIndexSettings ); final IngestService ingestService = new IngestService( @@ -2002,43 +2009,59 @@ DiscoveryNode getNode() { * Initializes the search cache with a defined capacity. * The capacity of the cache is based on user configuration for {@link Node#NODE_SEARCH_CACHE_SIZE_SETTING}. * If the user doesn't configure the cache size, it fails if the node is a data + search node. - * Else it configures the size to 80% of available capacity for a dedicated search node, if not explicitly defined. + * Else it configures the size to 80% of total capacity for a dedicated search node, if not explicitly defined. */ private void initializeFileCache(Settings settings, CircuitBreaker circuitBreaker) throws IOException { boolean isWritableRemoteIndexEnabled = FeatureFlags.isEnabled(FeatureFlags.TIERED_REMOTE_INDEX_SETTING); - if (DiscoveryNode.isSearchNode(settings) || isWritableRemoteIndexEnabled) { - NodeEnvironment.NodePath fileCacheNodePath = nodeEnvironment.fileCacheNodePath(); - long capacity = NODE_SEARCH_CACHE_SIZE_SETTING.get(settings).getBytes(); - FsInfo.Path info = ExceptionsHelper.catchAsRuntimeException(() -> FsProbe.getFSInfo(fileCacheNodePath)); - long availableCapacity = info.getAvailable().getBytes(); - - // Initialize default values for cache if NODE_SEARCH_CACHE_SIZE_SETTING is not set. - if (capacity == 0) { - // If node is not a dedicated search node without configuration, prevent cache initialization - if (!isWritableRemoteIndexEnabled - && DiscoveryNode.getRolesFromSettings(settings) - .stream() - .anyMatch(role -> !DiscoveryNodeRole.SEARCH_ROLE.equals(role))) { - throw new SettingsException( - "Unable to initialize the " - + DiscoveryNodeRole.SEARCH_ROLE.roleName() - + "-" - + DiscoveryNodeRole.DATA_ROLE.roleName() - + " node: Missing value for configuration " - + NODE_SEARCH_CACHE_SIZE_SETTING.getKey() - ); - } else { - capacity = 80 * availableCapacity / 100; - } + if (DiscoveryNode.isSearchNode(settings) == false && isWritableRemoteIndexEnabled == false) { + return; + } + + String capacityRaw = NODE_SEARCH_CACHE_SIZE_SETTING.get(settings); + logger.info("cache size [{}]", capacityRaw); + if (capacityRaw.equals(ZERO)) { + throw new SettingsException( + "Unable to initialize the " + + DiscoveryNodeRole.SEARCH_ROLE.roleName() + + "-" + + DiscoveryNodeRole.DATA_ROLE.roleName() + + " node: Missing value for configuration " + + NODE_SEARCH_CACHE_SIZE_SETTING.getKey() + ); + } + + NodeEnvironment.NodePath fileCacheNodePath = nodeEnvironment.fileCacheNodePath(); + long totalSpace = ExceptionsHelper.catchAsRuntimeException(() -> FsProbe.getTotalSize(fileCacheNodePath)); + long capacity = calculateFileCacheSize(capacityRaw, totalSpace); + if (capacity <= 0 || totalSpace <= capacity) { + throw new SettingsException("Cache size must be larger than zero and less than total capacity"); + } + + this.fileCache = FileCacheFactory.createConcurrentLRUFileCache(capacity, circuitBreaker); + fileCacheNodePath.fileCacheReservedSize = new ByteSizeValue(this.fileCache.capacity(), ByteSizeUnit.BYTES); + List<Path> fileCacheDataPaths = collectFileCacheDataPath(fileCacheNodePath); + this.fileCache.restoreFromDirectory(fileCacheDataPaths); + } + + private static long calculateFileCacheSize(String capacityRaw, long totalSpace) { + try { + RatioValue ratioValue = RatioValue.parseRatioValue(capacityRaw); + return Math.round(totalSpace * ratioValue.getAsRatio()); + } catch (OpenSearchParseException e) { + try { + return ByteSizeValue.parseBytesSizeValue(capacityRaw, NODE_SEARCH_CACHE_SIZE_SETTING.getKey()).getBytes(); + } catch (OpenSearchParseException ex) { + ex.addSuppressed(e); + throw ex; } - capacity = Math.min(capacity, availableCapacity); - fileCacheNodePath.fileCacheReservedSize = new ByteSizeValue(capacity, ByteSizeUnit.BYTES); - this.fileCache = FileCacheFactory.createConcurrentLRUFileCache(capacity, circuitBreaker); - List<Path> fileCacheDataPaths = collectFileCacheDataPath(fileCacheNodePath); - this.fileCache.restoreFromDirectory(fileCacheDataPaths); } } + private static String validateFileCacheSize(String capacityRaw) { + calculateFileCacheSize(capacityRaw, 0L); + return capacityRaw; + } + /** * Returns the {@link FileCache} instance for remote search node * Note: Visible for testing
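To make the two accepted formats concrete, a small illustration of the helper above with assumed numbers (not part of the patch):

    long totalSpace = 107374182400L;             // a 100 GiB file-cache path, assumed
    calculateFileCacheSize("80%", totalSpace);   // ratio form -> 85899345920 bytes
    calculateFileCacheSize("16gb", totalSpace);  // byte-size form -> 17179869184 bytes

A raw "0" on a search-capable node never reaches the parser, since initializeFileCache rejects it with a SettingsException first, and validateFileCacheSize runs the same parser at setting registration so malformed values fail fast. Note that the derived default is now a ratio of total rather than available space, which makes it independent of how full the disk happens to be at startup.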
diff --git a/server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec b/server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec new file mode 100644 index 0000000000000..e030a813373c1 --- /dev/null +++ b/server/src/main/resources/META-INF/services/org.apache.lucene.codecs.Codec @@ -0,0 +1 @@ +org.opensearch.index.codec.composite.Composite99Codec diff --git a/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java b/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java index 618fcb923bc60..a434a713f330b 100644 --- a/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java +++ b/server/src/test/java/org/opensearch/cluster/metadata/MetadataTests.java @@ -1482,6 +1482,33 @@ public void testIsSegmentReplicationDisabled() { assertFalse(metadata.isSegmentReplicationEnabled(indexName)); } + public void testTemplatesMetadata() { + TemplatesMetadata templatesMetadata1 = TemplatesMetadata.builder() + .put( + IndexTemplateMetadata.builder("template_1") + .patterns(Arrays.asList("bar-*", "foo-*")) + .settings(Settings.builder().put("random_index_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)).build()) + .build() + ) + .build(); + Metadata metadata1 = Metadata.builder().templates(templatesMetadata1).build(); + assertThat(metadata1.templates(), is(templatesMetadata1.getTemplates())); + + TemplatesMetadata templatesMetadata2 = TemplatesMetadata.builder() + .put( + IndexTemplateMetadata.builder("template_2") + .patterns(Arrays.asList("bar-*", "foo-*")) + .settings(Settings.builder().put("random_index_setting_" + randomAlphaOfLength(3), randomAlphaOfLength(5)).build()) + .build() + ) + .build(); + + Metadata metadata2 = Metadata.builder(metadata1).templates(templatesMetadata2).build(); + + assertThat(metadata2.templates(), is(templatesMetadata2.getTemplates())); + + } + public static Metadata randomMetadata() { Metadata.Builder md = Metadata.builder() .put(buildIndexMetadata("index", "alias", randomBoolean() ? null : randomBoolean()).build(), randomBoolean()) diff --git a/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java b/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java index 7713fdf1d0adc..4b0fc3d2a7366 100644 --- a/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java +++ b/server/src/test/java/org/opensearch/common/cache/serializer/ICacheKeySerializerTests.java @@ -43,10 +43,9 @@ public void testInvalidInput() throws Exception { ICacheKeySerializer serializer = new ICacheKeySerializer<>(keySer); Random rand = Randomness.get(); - byte[] randomInput = new byte[1000]; - rand.nextBytes(randomInput); - - assertThrows(OpenSearchException.class, () -> serializer.deserialize(randomInput)); + // The first thing the serializer reads is a VInt for the number of dimensions.
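+ // (A VInt encodes seven bits per byte and uses the high bit as a continuation flag, so it spans at most five bytes and the fifth byte must leave that bit clear; a -1 byte (0xFF) always sets it.)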
+ // This is an invalid input for StreamInput.readVInt(), so we are guaranteed to have an exception + assertThrows(OpenSearchException.class, () -> serializer.deserialize(new byte[] { -1, -1, -1, -1, -1 })); } public void testDimNumbers() throws Exception { diff --git a/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java b/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java index c6e8252ddf806..8a59dd9d2d105 100644 --- a/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java +++ b/server/src/test/java/org/opensearch/common/cache/stats/DefaultCacheStatsHolderTests.java @@ -127,49 +127,58 @@ public void testCount() throws Exception { } public void testConcurrentRemoval() throws Exception { - List<String> dimensionNames = List.of("dim1", "dim2"); + List<String> dimensionNames = List.of("A", "B"); DefaultCacheStatsHolder cacheStatsHolder = new DefaultCacheStatsHolder(dimensionNames, storeName); // Create stats for the following dimension sets - List<List<String>> populatedStats = List.of(List.of("A1", "B1"), List.of("A2", "B2"), List.of("A2", "B3")); + List<List<String>> populatedStats = new ArrayList<>(); + int numAValues = 10; + int numBValues = 2; + for (int indexA = 0; indexA < numAValues; indexA++) { + for (int indexB = 0; indexB < numBValues; indexB++) { + populatedStats.add(List.of("A" + indexA, "B" + indexB)); + } + } for (List<String> dims : populatedStats) { cacheStatsHolder.incrementHits(dims); } - // Remove (A2, B2) and (A1, B1), before re-adding (A2, B2). At the end we should have stats for (A2, B2) but not (A1, B1). - - Thread[] threads = new Thread[3]; - CountDownLatch countDownLatch = new CountDownLatch(3); - threads[0] = new Thread(() -> { - cacheStatsHolder.removeDimensions(List.of("A2", "B2")); - countDownLatch.countDown(); - }); - threads[1] = new Thread(() -> { - cacheStatsHolder.removeDimensions(List.of("A1", "B1")); - countDownLatch.countDown(); - }); - threads[2] = new Thread(() -> { - cacheStatsHolder.incrementMisses(List.of("A2", "B2")); - cacheStatsHolder.incrementMisses(List.of("A2", "B3")); - countDownLatch.countDown(); - }); + // Remove a subset of the dimensions concurrently. + // Remove both (A0, B0), and (A0, B1), so we expect the intermediate node for A0 to be null afterwards. + // For all the others, remove only the B0 value. Then we expect the intermediate nodes for A1 through A9 to be present + // and reflect only the stats for their B1 child.
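+ // + // Expected shape afterwards (sketch): root -> A1..A9, each holding a single B1 leaf; the A0 subtree and every B0 leaf are gone.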
+ + Thread[] threads = new Thread[numAValues + 1]; + for (int i = 0; i < numAValues; i++) { + int finalI = i; + threads[i] = new Thread(() -> { cacheStatsHolder.removeDimensions(List.of("A" + finalI, "B0")); }); + } + threads[numAValues] = new Thread(() -> { cacheStatsHolder.removeDimensions(List.of("A0", "B1")); }); for (Thread thread : threads) { thread.start(); - // Add short sleep to ensure threads start their functions in order (so that incrementing doesn't happen before removal) - Thread.sleep(1); } - countDownLatch.await(); - assertNull(getNode(List.of("A1", "B1"), cacheStatsHolder.getStatsRoot())); - assertNull(getNode(List.of("A1"), cacheStatsHolder.getStatsRoot())); - assertNotNull(getNode(List.of("A2", "B2"), cacheStatsHolder.getStatsRoot())); - assertEquals( - new ImmutableCacheStats(0, 1, 0, 0, 0), - getNode(List.of("A2", "B2"), cacheStatsHolder.getStatsRoot()).getImmutableStats() - ); - assertEquals( - new ImmutableCacheStats(1, 1, 0, 0, 0), - getNode(List.of("A2", "B3"), cacheStatsHolder.getStatsRoot()).getImmutableStats() - ); + for (Thread thread : threads) { + thread.join(); + } + + // intermediate node for A0 should be null + assertNull(getNode(List.of("A0"), cacheStatsHolder.getStatsRoot())); + + // leaf nodes for all B0 values should be null since they were removed + for (int indexA = 0; indexA < numAValues; indexA++) { + assertNull(getNode(List.of("A" + indexA, "B0"), cacheStatsHolder.getStatsRoot())); + } + + // leaf nodes for all B1 values, except (A0, B1), should not be null as they weren't removed, + // and the intermediate nodes A1 through A9 shouldn't be null as they have remaining children + for (int indexA = 1; indexA < numAValues; indexA++) { + DefaultCacheStatsHolder.Node b1LeafNode = getNode(List.of("A" + indexA, "B1"), cacheStatsHolder.getStatsRoot()); + assertNotNull(b1LeafNode); + assertEquals(new ImmutableCacheStats(1, 0, 0, 0, 0), b1LeafNode.getImmutableStats()); + DefaultCacheStatsHolder.Node intermediateLevelNode = getNode(List.of("A" + indexA), cacheStatsHolder.getStatsRoot()); + assertNotNull(intermediateLevelNode); + assertEquals(b1LeafNode.getImmutableStats(), intermediateLevelNode.getImmutableStats()); + } } /** diff --git a/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java b/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java index 2a3525143c01f..d2d6fdc387dfe 100644 --- a/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java +++ b/server/src/test/java/org/opensearch/env/NodeRepurposeCommandTests.java @@ -95,7 +95,7 @@ public void createNodePaths() throws IOException { dataClusterManagerSettings = buildEnvSettings(Settings.EMPTY); Settings defaultSearchSettings = Settings.builder() .put(dataClusterManagerSettings) - .put(NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), new ByteSizeValue(16, ByteSizeUnit.GB)) + .put(NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), new ByteSizeValue(16, ByteSizeUnit.GB).toString()) .build(); searchNoDataNoClusterManagerSettings = onlyRole(dataClusterManagerSettings, DiscoveryNodeRole.SEARCH_ROLE); diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java index 41e1546ead164..fe9ed57fa77b8 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateAttributesManagerTests.java @@ -17,9 +17,10 @@ import 
org.opensearch.cluster.DiffableUtils; import org.opensearch.cluster.block.ClusterBlocks; import org.opensearch.cluster.node.DiscoveryNodes; -import org.opensearch.common.CheckedRunnable; +import org.opensearch.common.blobstore.BlobPath; import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.TestCapturingListener; import org.opensearch.core.action.ActionListener; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; import org.opensearch.core.common.io.stream.StreamInput; @@ -28,6 +29,7 @@ import org.opensearch.core.compress.NoneCompressor; import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.gateway.remote.model.RemoteClusterBlocks; +import org.opensearch.gateway.remote.model.RemoteClusterStateCustoms; import org.opensearch.gateway.remote.model.RemoteDiscoveryNodes; import org.opensearch.gateway.remote.model.RemoteReadResult; import org.opensearch.index.translog.transfer.BlobStoreTransferService; @@ -36,46 +38,63 @@ import org.opensearch.threadpool.TestThreadPool; import org.opensearch.threadpool.ThreadPool; import org.junit.After; -import org.junit.Assert; import org.junit.Before; import java.io.IOException; +import java.io.InputStream; import java.util.Collections; import java.util.List; import java.util.Map; import java.util.concurrent.CountDownLatch; -import java.util.concurrent.atomic.AtomicReference; -import static java.util.Collections.emptyList; +import static org.opensearch.common.blobstore.stream.write.WritePriority.URGENT; +import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTE; +import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION; import static org.opensearch.gateway.remote.RemoteClusterStateAttributesManager.DISCOVERY_NODES; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_EPHEMERAL_PATH_TOKEN; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CLUSTER_STATE_PATH_TOKEN; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.CUSTOM_DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.PATH_DELIMITER; +import static org.opensearch.gateway.remote.RemoteClusterStateUtils.encodeString; import static org.opensearch.gateway.remote.model.RemoteClusterBlocks.CLUSTER_BLOCKS; import static org.opensearch.gateway.remote.model.RemoteClusterBlocks.CLUSTER_BLOCKS_FORMAT; import static org.opensearch.gateway.remote.model.RemoteClusterBlocksTests.randomClusterBlocks; +import static org.opensearch.gateway.remote.model.RemoteClusterStateCustoms.CLUSTER_STATE_CUSTOM; +import static org.opensearch.gateway.remote.model.RemoteClusterStateCustomsTests.getClusterStateCustom; import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodes.DISCOVERY_NODES_FORMAT; import static org.opensearch.gateway.remote.model.RemoteDiscoveryNodesTests.getDiscoveryNodes; +import static org.opensearch.index.remote.RemoteStoreUtils.invertLong; import static org.hamcrest.Matchers.is; +import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyIterable; import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.eq; +import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.mock; import static 
org.mockito.Mockito.when; public class RemoteClusterStateAttributesManagerTests extends OpenSearchTestCase { private RemoteClusterStateAttributesManager remoteClusterStateAttributesManager; private BlobStoreTransferService blobStoreTransferService; - private BlobStoreRepository blobStoreRepository; private Compressor compressor; - private ThreadPool threadPool = new TestThreadPool(RemoteClusterStateAttributesManagerTests.class.getName()); + private final ThreadPool threadPool = new TestThreadPool(RemoteClusterStateAttributesManagerTests.class.getName()); + private final long VERSION = 7331L; + private NamedWriteableRegistry namedWriteableRegistry; + private final String CLUSTER_NAME = "test-cluster"; + private final String CLUSTER_UUID = "test-cluster-uuid"; @Before public void setup() throws Exception { ClusterSettings clusterSettings = new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS); - NamedWriteableRegistry namedWriteableRegistry = new NamedWriteableRegistry(emptyList()); - blobStoreRepository = mock(BlobStoreRepository.class); + namedWriteableRegistry = writableRegistry(); + BlobStoreRepository blobStoreRepository = mock(BlobStoreRepository.class); + when(blobStoreRepository.basePath()).thenReturn(new BlobPath()); blobStoreTransferService = mock(BlobStoreTransferService.class); compressor = new NoneCompressor(); remoteClusterStateAttributesManager = new RemoteClusterStateAttributesManager( - "test-cluster", + CLUSTER_NAME, blobStoreRepository, blobStoreTransferService, writableRegistry(), @@ -89,7 +108,40 @@ public void tearDown() throws Exception { threadPool.shutdown(); } - public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException { + public void testGetAsyncMetadataWriteAction_DiscoveryNodes() throws IOException, InterruptedException { + DiscoveryNodes discoveryNodes = getDiscoveryNodes(); + RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(discoveryNodes, VERSION, CLUSTER_UUID, compressor); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + final CountDownLatch latch = new CountDownLatch(1); + final TestCapturingListener listener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + DISCOVERY_NODES, + remoteDiscoveryNodes, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(DISCOVERY_NODES, listener.getResult().getComponent()); + String uploadedFileName = listener.getResult().getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(5, pathTokens.length); + assertEquals(RemoteClusterStateUtils.encodeString(CLUSTER_NAME), pathTokens[0]); + assertEquals(CLUSTER_STATE_PATH_TOKEN, pathTokens[1]); + assertEquals(CLUSTER_UUID, pathTokens[2]); + assertEquals(CLUSTER_STATE_EPHEMERAL_PATH_TOKEN, pathTokens[3]); + String[] splitFileName = pathTokens[4].split(DELIMITER); + assertEquals(4, splitFileName.length); + assertEquals(DISCOVERY_NODES, splitFileName[0]); + assertEquals(invertLong(VERSION), splitFileName[1]); + assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); + } + + public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException, 
InterruptedException { DiscoveryNodes discoveryNodes = getDiscoveryNodes(); String fileName = randomAlphaOfLength(10); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( @@ -97,29 +149,57 @@ public void testGetAsyncMetadataReadAction_DiscoveryNodes() throws IOException { ); RemoteDiscoveryNodes remoteObjForDownload = new RemoteDiscoveryNodes(fileName, "cluster-uuid", compressor); CountDownLatch latch = new CountDownLatch(1); - AtomicReference readDiscoveryNodes = new AtomicReference<>(); - LatchedActionListener assertingListener = new LatchedActionListener<>( - ActionListener.wrap(response -> readDiscoveryNodes.set((DiscoveryNodes) response.getObj()), Assert::assertNull), - latch - ); - CheckedRunnable runnable = remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + TestCapturingListener listener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( DISCOVERY_NODES, remoteObjForDownload, - assertingListener - ); + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(CLUSTER_STATE_ATTRIBUTE, listener.getResult().getComponent()); + assertEquals(DISCOVERY_NODES, listener.getResult().getComponentName()); + DiscoveryNodes readDiscoveryNodes = (DiscoveryNodes) listener.getResult().getObj(); + assertEquals(discoveryNodes.getSize(), readDiscoveryNodes.getSize()); + discoveryNodes.getNodes().forEach((nodeId, node) -> assertEquals(readDiscoveryNodes.get(nodeId), node)); + assertEquals(discoveryNodes.getClusterManagerNodeId(), readDiscoveryNodes.getClusterManagerNodeId()); + } - try { - runnable.run(); - latch.await(); - assertEquals(discoveryNodes.getSize(), readDiscoveryNodes.get().getSize()); - discoveryNodes.getNodes().forEach((nodeId, node) -> assertEquals(readDiscoveryNodes.get().get(nodeId), node)); - assertEquals(discoveryNodes.getClusterManagerNodeId(), readDiscoveryNodes.get().getClusterManagerNodeId()); - } catch (Exception e) { - throw new RuntimeException(e); - } + public void testGetAsyncMetadataWriteAction_ClusterBlocks() throws IOException, InterruptedException { + ClusterBlocks clusterBlocks = randomClusterBlocks(); + RemoteClusterBlocks remoteClusterBlocks = new RemoteClusterBlocks(clusterBlocks, VERSION, CLUSTER_UUID, compressor); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + final CountDownLatch latch = new CountDownLatch(1); + final TestCapturingListener listener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + CLUSTER_BLOCKS, + remoteClusterBlocks, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(CLUSTER_BLOCKS, listener.getResult().getComponent()); + String uploadedFileName = listener.getResult().getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(5, pathTokens.length); + assertEquals(encodeString(CLUSTER_NAME), pathTokens[0]); + assertEquals(CLUSTER_STATE_PATH_TOKEN, pathTokens[1]); + assertEquals(CLUSTER_UUID, pathTokens[2]); + assertEquals(CLUSTER_STATE_EPHEMERAL_PATH_TOKEN, pathTokens[3]); + String[] splitFileName = 
pathTokens[4].split(DELIMITER); + assertEquals(4, splitFileName.length); + assertEquals(CLUSTER_BLOCKS, splitFileName[0]); + assertEquals(invertLong(VERSION), splitFileName[1]); + assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); } - public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException { + public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException, InterruptedException { ClusterBlocks clusterBlocks = randomClusterBlocks(); String fileName = randomAlphaOfLength(10); when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( @@ -127,29 +207,133 @@ public void testGetAsyncMetadataReadAction_ClusterBlocks() throws IOException { ); RemoteClusterBlocks remoteClusterBlocks = new RemoteClusterBlocks(fileName, "cluster-uuid", compressor); CountDownLatch latch = new CountDownLatch(1); - AtomicReference readClusterBlocks = new AtomicReference<>(); - LatchedActionListener assertingListener = new LatchedActionListener<>( - ActionListener.wrap(response -> readClusterBlocks.set((ClusterBlocks) response.getObj()), Assert::assertNull), - latch - ); + TestCapturingListener listener = new TestCapturingListener<>(); - CheckedRunnable runnable = remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( CLUSTER_BLOCKS, remoteClusterBlocks, - assertingListener + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(CLUSTER_STATE_ATTRIBUTE, listener.getResult().getComponent()); + assertEquals(CLUSTER_BLOCKS, listener.getResult().getComponentName()); + ClusterBlocks readClusterBlocks = (ClusterBlocks) listener.getResult().getObj(); + assertEquals(clusterBlocks.global(), readClusterBlocks.global()); + assertEquals(clusterBlocks.indices().keySet(), readClusterBlocks.indices().keySet()); + for (String index : clusterBlocks.indices().keySet()) { + assertEquals(clusterBlocks.indices().get(index), readClusterBlocks.indices().get(index)); + } + } + + public void testGetAsyncMetadataWriteAction_Custom() throws IOException, InterruptedException { + Custom custom = getClusterStateCustom(); + RemoteClusterStateCustoms remoteClusterStateCustoms = new RemoteClusterStateCustoms( + custom, + custom.getWriteableName(), + VERSION, + CLUSTER_UUID, + compressor, + namedWriteableRegistry ); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onResponse(null); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + final TestCapturingListener listener = new TestCapturingListener<>(); + final CountDownLatch latch = new CountDownLatch(1); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + CLUSTER_STATE_CUSTOM, + remoteClusterStateCustoms, + new LatchedActionListener<>(listener, latch) + ).run(); + latch.await(); + assertNull(listener.getFailure()); + assertNotNull(listener.getResult()); + assertEquals(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, custom.getWriteableName()), listener.getResult().getComponent()); + String uploadedFileName = listener.getResult().getUploadedFilename(); + String[] pathTokens = uploadedFileName.split(PATH_DELIMITER); + assertEquals(5, pathTokens.length); + assertEquals(encodeString(CLUSTER_NAME), pathTokens[0]); + assertEquals(CLUSTER_STATE_PATH_TOKEN, 
pathTokens[1]); + assertEquals(CLUSTER_UUID, pathTokens[2]); + assertEquals(CLUSTER_STATE_EPHEMERAL_PATH_TOKEN, pathTokens[3]); + String[] splitFileName = pathTokens[4].split(DELIMITER); + assertEquals(4, splitFileName.length); + assertEquals(String.join(CUSTOM_DELIMITER, CLUSTER_STATE_CUSTOM, custom.getWriteableName()), splitFileName[0]); + assertEquals(invertLong(VERSION), splitFileName[1]); + assertEquals(CLUSTER_STATE_ATTRIBUTES_CURRENT_CODEC_VERSION, Integer.parseInt(splitFileName[3])); + } - try { - runnable.run(); - latch.await(); - assertEquals(clusterBlocks.global(), readClusterBlocks.get().global()); - assertEquals(clusterBlocks.indices().keySet(), readClusterBlocks.get().indices().keySet()); - for (String index : clusterBlocks.indices().keySet()) { - assertEquals(clusterBlocks.indices().get(index), readClusterBlocks.get().indices().get(index)); - } - } catch (Exception e) { - throw new RuntimeException(e); - } + public void testGetAsyncMetadataReadAction_Custom() throws IOException, InterruptedException { + Custom custom = getClusterStateCustom(); + String fileName = randomAlphaOfLength(10); + RemoteClusterStateCustoms remoteClusterStateCustoms = new RemoteClusterStateCustoms( + fileName, + custom.getWriteableName(), + CLUSTER_UUID, + compressor, + namedWriteableRegistry + ); + when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenReturn( + remoteClusterStateCustoms.clusterStateCustomsFormat.serialize(custom, fileName, compressor).streamInput() + ); + TestCapturingListener capturingListener = new TestCapturingListener<>(); + final CountDownLatch latch = new CountDownLatch(1); + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + CLUSTER_STATE_CUSTOM, + remoteClusterStateCustoms, + new LatchedActionListener<>(capturingListener, latch) + ).run(); + latch.await(); + assertNull(capturingListener.getFailure()); + assertNotNull(capturingListener.getResult()); + assertEquals(custom, capturingListener.getResult().getObj()); + assertEquals(CLUSTER_STATE_ATTRIBUTE, capturingListener.getResult().getComponent()); + assertEquals(CLUSTER_STATE_CUSTOM, capturingListener.getResult().getComponentName()); + } + + public void testGetAsyncMetadataWriteAction_Exception() throws IOException, InterruptedException { + DiscoveryNodes discoveryNodes = getDiscoveryNodes(); + RemoteDiscoveryNodes remoteDiscoveryNodes = new RemoteDiscoveryNodes(discoveryNodes, VERSION, CLUSTER_UUID, compressor); + + IOException ioException = new IOException("mock test exception"); + doAnswer(invocationOnMock -> { + invocationOnMock.getArgument(4, ActionListener.class).onFailure(ioException); + return null; + }).when(blobStoreTransferService) + .uploadBlob(any(InputStream.class), anyIterable(), anyString(), eq(URGENT), any(ActionListener.class)); + + TestCapturingListener capturingListener = new TestCapturingListener<>(); + final CountDownLatch latch = new CountDownLatch(1); + remoteClusterStateAttributesManager.getAsyncMetadataWriteAction( + DISCOVERY_NODES, + remoteDiscoveryNodes, + new LatchedActionListener<>(capturingListener, latch) + ).run(); + latch.await(); + assertNull(capturingListener.getResult()); + assertTrue(capturingListener.getFailure() instanceof RemoteStateTransferException); + assertEquals(ioException, capturingListener.getFailure().getCause()); + } + + public void testGetAsyncMetadataReadAction_Exception() throws IOException, InterruptedException { + String fileName = randomAlphaOfLength(10); + RemoteDiscoveryNodes remoteDiscoveryNodes = new 
RemoteDiscoveryNodes(fileName, CLUSTER_UUID, compressor); + Exception ioException = new IOException("mock test exception"); + when(blobStoreTransferService.downloadBlob(anyIterable(), anyString())).thenThrow(ioException); + CountDownLatch latch = new CountDownLatch(1); + TestCapturingListener capturingListener = new TestCapturingListener<>(); + remoteClusterStateAttributesManager.getAsyncMetadataReadAction( + DISCOVERY_NODES, + remoteDiscoveryNodes, + new LatchedActionListener<>(capturingListener, latch) + ).run(); + latch.await(); + assertNull(capturingListener.getResult()); + assertEquals(ioException, capturingListener.getFailure()); } public void testGetUpdatedCustoms() { diff --git a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java index c8fd982fec1e1..d983a4d8c4027 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/RemoteClusterStateServiceTests.java @@ -20,6 +20,7 @@ import org.opensearch.cluster.metadata.IndexTemplateMetadata; import org.opensearch.cluster.metadata.Metadata; import org.opensearch.cluster.metadata.TemplatesMetadata; +import org.opensearch.cluster.node.DiscoveryNode; import org.opensearch.cluster.node.DiscoveryNodes; import org.opensearch.cluster.routing.RoutingTable; import org.opensearch.cluster.routing.remote.InternalRemoteRoutingTableService; @@ -92,6 +93,7 @@ import org.mockito.ArgumentCaptor; import org.mockito.ArgumentMatchers; +import org.mockito.Mockito; import static java.util.stream.Collectors.toList; import static org.opensearch.common.util.FeatureFlags.REMOTE_PUBLICATION_EXPERIMENTAL; @@ -111,6 +113,7 @@ import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_SETTINGS_ATTRIBUTE_KEY_PREFIX; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_REPOSITORY_TYPE_ATTRIBUTE_KEY_FORMAT; import static org.opensearch.node.remotestore.RemoteStoreNodeAttribute.REMOTE_STORE_ROUTING_TABLE_REPOSITORY_NAME_ATTRIBUTE_KEY; +import static org.hamcrest.Matchers.anEmptyMap; import static org.hamcrest.Matchers.equalTo; import static org.hamcrest.Matchers.is; import static org.hamcrest.Matchers.not; @@ -118,6 +121,7 @@ import static org.hamcrest.Matchers.nullValue; import static org.mockito.ArgumentMatchers.any; import static org.mockito.ArgumentMatchers.anyString; +import static org.mockito.ArgumentMatchers.eq; import static org.mockito.Mockito.doAnswer; import static org.mockito.Mockito.doThrow; import static org.mockito.Mockito.mock; @@ -518,11 +522,13 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().indices(Collections.emptyList()).build(); remoteClusterStateService.start(); - final ClusterMetadataManifest manifest = remoteClusterStateService.writeIncrementalMetadata( + final RemoteClusterStateService rcssSpy = Mockito.spy(remoteClusterStateService); + final RemoteClusterStateManifestInfo manifestInfo = rcssSpy.writeIncrementalMetadata( previousClusterState, clusterState, previousManifest - ).getClusterMetadataManifest(); + ); + final ClusterMetadataManifest manifest = manifestInfo.getClusterMetadataManifest(); final UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "index-uuid", "metadata-filename__2"); final List indices = 
List.of(uploadedIndexMetadata); @@ -535,6 +541,24 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { .previousClusterUUID("prev-cluster-uuid") .build(); + Mockito.verify(rcssSpy) + .writeMetadataInParallel( + eq(clusterState), + eq(new ArrayList(clusterState.metadata().indices().values())), + eq(Collections.singletonMap(indices.get(0).getIndexName(), null)), + eq(clusterState.metadata().customs()), + eq(true), + eq(true), + eq(true), + eq(false), + eq(false), + eq(false), + eq(Collections.emptyMap()), + eq(false), + eq(Collections.emptyList()) + ); + + assertThat(manifestInfo.getManifestFileName(), notNullValue()); assertThat(manifest.getIndices().size(), is(1)); assertThat(manifest.getIndices().get(0).getIndexName(), is(uploadedIndexMetadata.getIndexName())); assertThat(manifest.getIndices().get(0).getIndexUUID(), is(uploadedIndexMetadata.getIndexUUID())); @@ -543,6 +567,95 @@ public void testWriteIncrementalMetadataSuccess() throws IOException { assertThat(manifest.getStateVersion(), is(expectedManifest.getStateVersion())); assertThat(manifest.getClusterUUID(), is(expectedManifest.getClusterUUID())); assertThat(manifest.getStateUUID(), is(expectedManifest.getStateUUID())); + assertThat(manifest.getHashesOfConsistentSettings(), nullValue()); + assertThat(manifest.getDiscoveryNodesMetadata(), nullValue()); + assertThat(manifest.getClusterBlocksMetadata(), nullValue()); + assertThat(manifest.getClusterStateCustomMap(), anEmptyMap()); + assertThat(manifest.getTransientSettingsMetadata(), nullValue()); + assertThat(manifest.getTemplatesMetadata(), notNullValue()); + assertThat(manifest.getCoordinationMetadata(), notNullValue()); + assertThat(manifest.getCustomMetadataMap().size(), is(2)); + assertThat(manifest.getIndicesRouting().size(), is(0)); + } + + public void testWriteIncrementalMetadataSuccessWhenPublicationEnabled() throws IOException { + publicationEnabled = true; + Settings nodeSettings = Settings.builder().put(REMOTE_PUBLICATION_EXPERIMENTAL, publicationEnabled).build(); + FeatureFlags.initializeFeatureFlags(nodeSettings); + remoteClusterStateService = new RemoteClusterStateService( + "test-node-id", + repositoriesServiceSupplier, + settings, + clusterService, + () -> 0L, + threadPool, + List.of(new RemoteIndexPathUploader(threadPool, settings, repositoriesServiceSupplier, clusterSettings)), + writableRegistry() + ); + final ClusterState clusterState = generateClusterStateWithOneIndex().nodes(nodesWithLocalNodeClusterManager()).build(); + mockBlobStoreObjects(); + final CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder().term(1L).build(); + final ClusterState previousClusterState = ClusterState.builder(ClusterName.DEFAULT) + .metadata(Metadata.builder().coordinationMetadata(coordinationMetadata)) + .build(); + + final ClusterMetadataManifest previousManifest = ClusterMetadataManifest.builder().indices(Collections.emptyList()).build(); + + remoteClusterStateService.start(); + final RemoteClusterStateService rcssSpy = Mockito.spy(remoteClusterStateService); + final RemoteClusterStateManifestInfo manifestInfo = rcssSpy.writeIncrementalMetadata( + previousClusterState, + clusterState, + previousManifest + ); + final ClusterMetadataManifest manifest = manifestInfo.getClusterMetadataManifest(); + final UploadedIndexMetadata uploadedIndexMetadata = new UploadedIndexMetadata("test-index", "index-uuid", "metadata-filename__2"); + final List indices = List.of(uploadedIndexMetadata); + + final ClusterMetadataManifest expectedManifest = 
ClusterMetadataManifest.builder() + .indices(indices) + .clusterTerm(1L) + .stateVersion(1L) + .stateUUID("state-uuid") + .clusterUUID("cluster-uuid") + .previousClusterUUID("prev-cluster-uuid") + .build(); + + Mockito.verify(rcssSpy) + .writeMetadataInParallel( + eq(clusterState), + eq(new ArrayList(clusterState.metadata().indices().values())), + eq(Collections.singletonMap(indices.get(0).getIndexName(), null)), + eq(clusterState.metadata().customs()), + eq(true), + eq(true), + eq(true), + eq(true), + eq(false), + eq(false), + eq(Collections.emptyMap()), + eq(true), + Mockito.anyList() + ); + + assertThat(manifestInfo.getManifestFileName(), notNullValue()); + assertThat(manifest.getIndices().size(), is(1)); + assertThat(manifest.getIndices().get(0).getIndexName(), is(uploadedIndexMetadata.getIndexName())); + assertThat(manifest.getIndices().get(0).getIndexUUID(), is(uploadedIndexMetadata.getIndexUUID())); + assertThat(manifest.getIndices().get(0).getUploadedFilename(), notNullValue()); + assertThat(manifest.getClusterTerm(), is(expectedManifest.getClusterTerm())); + assertThat(manifest.getStateVersion(), is(expectedManifest.getStateVersion())); + assertThat(manifest.getClusterUUID(), is(expectedManifest.getClusterUUID())); + assertThat(manifest.getStateUUID(), is(expectedManifest.getStateUUID())); + assertThat(manifest.getHashesOfConsistentSettings(), notNullValue()); + assertThat(manifest.getDiscoveryNodesMetadata(), notNullValue()); + assertThat(manifest.getClusterBlocksMetadata(), nullValue()); + assertThat(manifest.getClusterStateCustomMap(), anEmptyMap()); + assertThat(manifest.getTransientSettingsMetadata(), nullValue()); + assertThat(manifest.getTemplatesMetadata(), notNullValue()); + assertThat(manifest.getCoordinationMetadata(), notNullValue()); + assertThat(manifest.getCustomMetadataMap().size(), is(2)); + assertThat(manifest.getIndicesRouting().size(), is(1)); } /* @@ -2012,7 +2125,9 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { .build(); final CoordinationMetadata coordinationMetadata = CoordinationMetadata.builder().term(1L).build(); final Settings settings = Settings.builder().put("mock-settings", true).build(); - final TemplatesMetadata templatesMetadata = TemplatesMetadata.EMPTY_METADATA; + final TemplatesMetadata templatesMetadata = TemplatesMetadata.builder() + .put(IndexTemplateMetadata.builder("template1").settings(idxSettings).patterns(List.of("test*")).build()) + .build(); final CustomMetadata1 customMetadata1 = new CustomMetadata1("custom-metadata-1"); return ClusterState.builder(ClusterName.DEFAULT) .version(1L) @@ -2025,6 +2140,7 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { .coordinationMetadata(coordinationMetadata) .persistentSettings(settings) .templates(templatesMetadata) + .hashesOfConsistentSettings(Map.of("key1", "value1", "key2", "value2")) .putCustom(customMetadata1.getWriteableName(), customMetadata1) .build() ) @@ -2032,7 +2148,8 @@ static ClusterState.Builder generateClusterStateWithOneIndex() { } static DiscoveryNodes nodesWithLocalNodeClusterManager() { - return DiscoveryNodes.builder().clusterManagerNodeId("cluster-manager-id").localNodeId("cluster-manager-id").build(); + final DiscoveryNode localNode = new DiscoveryNode("cluster-manager-id", buildNewFakeTransportAddress(), Version.CURRENT); + return DiscoveryNodes.builder().clusterManagerNodeId("cluster-manager-id").localNodeId("cluster-manager-id").add(localNode).build(); } private static class CustomMetadata1 extends TestCustomMetadata { diff --git 
a/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java b/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java index 1e28817be79f2..60cceb205f43d 100644 --- a/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java +++ b/server/src/test/java/org/opensearch/gateway/remote/model/RemoteCustomMetadataTests.java @@ -8,6 +8,8 @@ package org.opensearch.gateway.remote.model; +import org.opensearch.Version; +import org.opensearch.cluster.ClusterModule; import org.opensearch.cluster.metadata.IndexGraveyard; import org.opensearch.cluster.metadata.Metadata.Custom; import org.opensearch.common.blobstore.BlobPath; @@ -16,13 +18,20 @@ import org.opensearch.common.settings.ClusterSettings; import org.opensearch.common.settings.Settings; import org.opensearch.core.common.io.stream.NamedWriteableRegistry; +import org.opensearch.core.common.io.stream.NamedWriteableRegistry.Entry; +import org.opensearch.core.common.io.stream.StreamInput; +import org.opensearch.core.common.io.stream.StreamOutput; import org.opensearch.core.compress.Compressor; import org.opensearch.core.compress.NoneCompressor; import org.opensearch.core.index.Index; +import org.opensearch.core.xcontent.XContentBuilder; import org.opensearch.gateway.remote.ClusterMetadataManifest.UploadedMetadata; import org.opensearch.gateway.remote.RemoteClusterStateUtils; import org.opensearch.index.remote.RemoteStoreUtils; import org.opensearch.index.translog.transfer.BlobStoreTransferService; +import org.opensearch.persistent.PersistentTaskParams; +import org.opensearch.persistent.PersistentTasksCustomMetadata; +import org.opensearch.persistent.PersistentTasksCustomMetadata.Assignment; import org.opensearch.repositories.blobstore.BlobStoreRepository; import org.opensearch.test.OpenSearchTestCase; import org.opensearch.threadpool.TestThreadPool; @@ -33,6 +42,7 @@ import java.io.IOException; import java.io.InputStream; import java.util.List; +import java.util.Objects; import static org.opensearch.gateway.remote.RemoteClusterStateUtils.GLOBAL_METADATA_CURRENT_CODEC_VERSION; import static org.opensearch.gateway.remote.model.RemoteCustomMetadata.CUSTOM_DELIMITER; @@ -216,24 +226,93 @@ public void testGetUploadedMetadata() throws IOException { public void testSerDe() throws IOException { Custom customMetadata = getCustomMetadata(); + verifySerDe(customMetadata, IndexGraveyard.TYPE); + } + + public void testSerDeForPersistentTasks() throws IOException { + Custom customMetadata = getPersistentTasksMetadata(); + verifySerDe(customMetadata, PersistentTasksCustomMetadata.TYPE); + } + + private void verifySerDe(Custom objectToUpload, String objectType) throws IOException { RemoteCustomMetadata remoteObjectForUpload = new RemoteCustomMetadata( - customMetadata, - IndexGraveyard.TYPE, + objectToUpload, + objectType, METADATA_VERSION, clusterUUID, compressor, - namedWriteableRegistry + customWritableRegistry() ); try (InputStream inputStream = remoteObjectForUpload.serialize()) { remoteObjectForUpload.setFullBlobName(BlobPath.cleanPath()); assertThat(inputStream.available(), greaterThan(0)); Custom readCustomMetadata = remoteObjectForUpload.deserialize(inputStream); - assertThat(readCustomMetadata, is(customMetadata)); + assertThat(readCustomMetadata, is(objectToUpload)); } } + private NamedWriteableRegistry customWritableRegistry() { + List entries = ClusterModule.getNamedWriteables(); + entries.add(new Entry(PersistentTaskParams.class, 
TestPersistentTaskParams.PARAM_NAME, TestPersistentTaskParams::new)); + return new NamedWriteableRegistry(entries); + } + public static Custom getCustomMetadata() { return IndexGraveyard.builder().addTombstone(new Index("test-index", "3q2423")).build(); } + private static Custom getPersistentTasksMetadata() { + return PersistentTasksCustomMetadata.builder() + .addTask("_task_1", "testTaskName", new TestPersistentTaskParams("task param data"), new Assignment(null, "_reason")) + .build(); + } + + public static class TestPersistentTaskParams implements PersistentTaskParams { + + private static final String PARAM_NAME = "testTaskName"; + + private final String data; + + public TestPersistentTaskParams(String data) { + this.data = data; + } + + public TestPersistentTaskParams(StreamInput in) throws IOException { + this(in.readString()); + } + + @Override + public String getWriteableName() { + return PARAM_NAME; + } + + @Override + public Version getMinimalSupportedVersion() { + return Version.V_2_13_0; + } + + @Override + public void writeTo(StreamOutput out) throws IOException { + out.writeString(data); + } + + @Override + public XContentBuilder toXContent(XContentBuilder builder, Params params) throws IOException { + return builder.startObject().field("data_field", data); + } + + @Override + public boolean equals(Object o) { + if (this == o) return true; + if (o == null || getClass() != o.getClass()) return false; + TestPersistentTaskParams that = (TestPersistentTaskParams) o; + return Objects.equals(data, that.data); + } + + @Override + public int hashCode() { + return Objects.hash(data); + } + } + } diff --git a/server/src/test/java/org/opensearch/index/codec/CodecTests.java b/server/src/test/java/org/opensearch/index/codec/CodecTests.java index b31edd79411d0..7146b7dc51753 100644 --- a/server/src/test/java/org/opensearch/index/codec/CodecTests.java +++ b/server/src/test/java/org/opensearch/index/codec/CodecTests.java @@ -48,6 +48,7 @@ import org.opensearch.env.Environment; import org.opensearch.index.IndexSettings; import org.opensearch.index.analysis.IndexAnalyzers; +import org.opensearch.index.codec.composite.Composite99Codec; import org.opensearch.index.engine.EngineConfig; import org.opensearch.index.mapper.MapperService; import org.opensearch.index.similarity.SimilarityService; @@ -59,6 +60,8 @@ import java.io.IOException; import java.util.Collections; +import org.mockito.Mockito; + import static org.opensearch.index.engine.EngineConfig.INDEX_CODEC_COMPRESSION_LEVEL_SETTING; import static org.hamcrest.Matchers.instanceOf; @@ -76,23 +79,52 @@ public void testDefault() throws Exception { assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); } + public void testDefaultWithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("default"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); + assert codec instanceof Composite99Codec; + } + public void testBestCompression() throws Exception { Codec codec = createCodecService(false).codec("best_compression"); assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); } + public void testBestCompressionWithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("best_compression"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); + assert codec instanceof Composite99Codec; + } + public void testLZ4() throws Exception { Codec codec = createCodecService(false).codec("lz4"); 
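// "lz4" is an alias for the default codec, so stored fields keep BEST_SPEED and, with no composite index mapped, the codec resolves to PerFieldMappingPostingFormatCodec as asserted below.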
assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); assert codec instanceof PerFieldMappingPostingFormatCodec; } + public void testLZ4WithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("lz4"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_SPEED, codec); + assert codec instanceof Composite99Codec; + } + public void testZlib() throws Exception { Codec codec = createCodecService(false).codec("zlib"); assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); assert codec instanceof PerFieldMappingPostingFormatCodec; } + public void testZlibWithCompositeIndex() throws Exception { + Codec codec = createCodecService(false, true).codec("zlib"); + assertStoredFieldsCompressionEquals(Lucene99Codec.Mode.BEST_COMPRESSION, codec); + assert codec instanceof Composite99Codec; + } + + public void testResolveDefaultCodecsWithCompositeIndex() throws Exception { + CodecService codecService = createCodecService(false, true); + assertThat(codecService.codec("default"), instanceOf(Composite99Codec.class)); + } + public void testBestCompressionWithCompressionLevel() { final Settings settings = Settings.builder() .put(INDEX_CODEC_COMPRESSION_LEVEL_SETTING.getKey(), randomIntBetween(1, 6)) @@ -150,10 +182,17 @@ private void assertStoredFieldsCompressionEquals(Lucene99Codec.Mode expected, Co } private CodecService createCodecService(boolean isMapperServiceNull) throws IOException { + return createCodecService(isMapperServiceNull, false); + } + + private CodecService createCodecService(boolean isMapperServiceNull, boolean isCompositeIndexPresent) throws IOException { Settings nodeSettings = Settings.builder().put(Environment.PATH_HOME_SETTING.getKey(), createTempDir()).build(); if (isMapperServiceNull) { return new CodecService(null, IndexSettingsModule.newIndexSettings("_na", nodeSettings), LogManager.getLogger("test")); } + if (isCompositeIndexPresent) { + return buildCodecServiceWithCompositeIndex(nodeSettings); + } return buildCodecService(nodeSettings); } @@ -176,6 +215,14 @@ private CodecService buildCodecService(Settings nodeSettings) throws IOException return new CodecService(service, indexSettings, LogManager.getLogger("test")); } + private CodecService buildCodecServiceWithCompositeIndex(Settings nodeSettings) throws IOException { + + IndexSettings indexSettings = IndexSettingsModule.newIndexSettings("_na", nodeSettings); + MapperService service = Mockito.mock(MapperService.class); + Mockito.when(service.isCompositeIndexPresent()).thenReturn(true); + return new CodecService(service, indexSettings, LogManager.getLogger("test")); + } + private SegmentReader getSegmentReader(Codec codec) throws IOException { Directory dir = newDirectory(); IndexWriterConfig iwc = newIndexWriterConfig(null); diff --git a/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java new file mode 100644 index 0000000000000..967ed0ca49faa --- /dev/null +++ b/server/src/test/java/org/opensearch/index/codec/composite/datacube/startree/StarTreeDocValuesFormatTests.java @@ -0,0 +1,207 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.codec.composite.datacube.startree; + +import com.carrotsearch.randomizedtesting.annotations.ThreadLeakScope; + +import org.apache.logging.log4j.LogManager; +import org.apache.logging.log4j.Logger; +import org.apache.lucene.codecs.Codec; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.document.Document; +import org.apache.lucene.document.SortedNumericDocValuesField; +import org.apache.lucene.index.IndexWriterConfig; +import org.apache.lucene.store.Directory; +import org.apache.lucene.tests.index.BaseDocValuesFormatTestCase; +import org.apache.lucene.tests.index.RandomIndexWriter; +import org.apache.lucene.tests.util.LuceneTestCase; +import org.opensearch.Version; +import org.opensearch.cluster.ClusterModule; +import org.opensearch.cluster.metadata.IndexMetadata; +import org.opensearch.common.CheckedConsumer; +import org.opensearch.common.Rounding; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.common.xcontent.XContentFactory; +import org.opensearch.core.xcontent.NamedXContentRegistry; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.MapperTestUtils; +import org.opensearch.index.codec.composite.Composite99Codec; +import org.opensearch.index.compositeindex.datacube.DateDimension; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.StarTreeMapper; +import org.junit.AfterClass; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Collections; +import java.util.List; + +import static org.opensearch.common.util.FeatureFlags.STAR_TREE_INDEX; +import static org.opensearch.index.engine.EngineTestCase.createMapperService; + +/** + * Star tree doc values Lucene tests + */ +@LuceneTestCase.SuppressSysoutChecks(bugUrl = "we log a lot on purpose") +@ThreadLeakScope(ThreadLeakScope.Scope.NONE) +public class StarTreeDocValuesFormatTests extends BaseDocValuesFormatTestCase { + @BeforeClass + public static void createMapper() throws Exception { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(STAR_TREE_INDEX, "true").build()); + } + + @AfterClass + public static void clearMapper() { + FeatureFlags.initializeFeatureFlags(Settings.EMPTY); + } + + @Override + protected Codec getCodec() { + final Logger testLogger = LogManager.getLogger(StarTreeDocValuesFormatTests.class); + MapperService mapperService = null; + try { + mapperService = createMapperService(getExpandedMapping("status", "size")); + } catch (IOException e) { + throw new RuntimeException(e); + } + Codec codec = new Composite99Codec(Lucene99Codec.Mode.BEST_SPEED, mapperService, testLogger); + return codec; + } + + private StarTreeMapper.StarTreeFieldType getStarTreeFieldType() { + List<MetricStat> m1 = new ArrayList<>(); + m1.add(MetricStat.MAX); + Metric metric = new Metric("sndv", m1); + List<Rounding.DateTimeUnit> d1CalendarIntervals = new ArrayList<>(); + d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); + StarTreeField starTreeField = getStarTreeField(d1CalendarIntervals, 
metric); + + return new StarTreeMapper.StarTreeFieldType("star_tree", starTreeField); + } + + private static StarTreeField getStarTreeField(List<Rounding.DateTimeUnit> d1CalendarIntervals, Metric metric1) { + DateDimension d1 = new DateDimension("field", d1CalendarIntervals); + NumericDimension d2 = new NumericDimension("dv"); + + List<Metric> metrics = List.of(metric1); + List<Dimension> dims = List.of(d1, d2); + StarTreeFieldConfiguration config = new StarTreeFieldConfiguration( + 100, + Collections.emptySet(), + StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP // TODO : change it + ); + + return new StarTreeField("starTree", dims, metrics, config); + } + + public void testStarTreeDocValues() throws IOException { + Directory directory = newDirectory(); + IndexWriterConfig conf = newIndexWriterConfig(null); + conf.setMergePolicy(newLogMergePolicy()); + RandomIndexWriter iw = new RandomIndexWriter(random(), directory, conf); + Document doc = new Document(); + doc.add(new SortedNumericDocValuesField("sndv", 1)); + doc.add(new SortedNumericDocValuesField("dv", 1)); + doc.add(new SortedNumericDocValuesField("field", 1)); + iw.addDocument(doc); + doc.add(new SortedNumericDocValuesField("sndv", 1)); + doc.add(new SortedNumericDocValuesField("dv", 1)); + doc.add(new SortedNumericDocValuesField("field", 1)); + iw.addDocument(doc); + iw.forceMerge(1); + doc.add(new SortedNumericDocValuesField("sndv", 2)); + doc.add(new SortedNumericDocValuesField("dv", 2)); + doc.add(new SortedNumericDocValuesField("field", 2)); + iw.addDocument(doc); + doc.add(new SortedNumericDocValuesField("sndv", 2)); + doc.add(new SortedNumericDocValuesField("dv", 2)); + doc.add(new SortedNumericDocValuesField("field", 2)); + iw.addDocument(doc); + iw.forceMerge(1); + iw.close(); + + // TODO : validate star tree structures that got created + directory.close(); + } + + private XContentBuilder getExpandedMapping(String dim, String metric) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + b.field("max_leaf_docs", 100); + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "sndv"); + b.endObject(); + b.startObject(); + b.field("name", "dv"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "field"); + b.startArray("stats"); + b.value("sum"); + b.value("count"); // TODO : THIS TEST FAILS. 
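// Editor's note, illustrative only: the builder in this method corresponds roughly to this
// mapping JSON (field names are the test's own):
//   "composite": { "startree": { "type": "star_tree", "config": {
//       "max_leaf_docs": 100,
//       "ordered_dimensions": [ { "name": "sndv" }, { "name": "dv" } ],
//       "metrics": [ { "name": "field", "stats": [ "sum", "count" ] } ] } } }
// followed by plain integer "properties" entries for sndv, dv and field.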
+ b.endArray(); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("sndv"); + b.field("type", "integer"); + b.endObject(); + b.startObject("dv"); + b.field("type", "integer"); + b.endObject(); + b.startObject("field"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder topMapping(CheckedConsumer buildFields) throws IOException { + XContentBuilder builder = XContentFactory.jsonBuilder().startObject().startObject("_doc"); + buildFields.accept(builder); + return builder.endObject().endObject(); + } + + private MapperService createMapperService(XContentBuilder builder) throws IOException { + IndexMetadata indexMetadata = IndexMetadata.builder("test") + .settings( + Settings.builder() + .put(IndexMetadata.SETTING_VERSION_CREATED, Version.CURRENT) + .put(IndexMetadata.SETTING_NUMBER_OF_SHARDS, 1) + .put(IndexMetadata.SETTING_NUMBER_OF_REPLICAS, 1) + ) + .putMapping(builder.toString()) + .build(); + MapperService mapperService = MapperTestUtils.newMapperService( + new NamedXContentRegistry(ClusterModule.getNamedXWriteables()), + createTempDir(), + Settings.EMPTY, + "test" + ); + mapperService.merge(indexMetadata, MapperService.MergeReason.INDEX_TEMPLATE); + return mapperService; + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java new file mode 100644 index 0000000000000..e30e203406a6c --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/CountValueAggregatorTests.java @@ -0,0 +1,53 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; + +public class CountValueAggregatorTests extends OpenSearchTestCase { + private final CountValueAggregator aggregator = new CountValueAggregator(); + + public void testGetAggregationType() { + assertEquals(MetricStat.COUNT.getTypeName(), aggregator.getAggregationType().getTypeName()); + } + + public void testGetAggregatedValueType() { + assertEquals(CountValueAggregator.VALUE_AGGREGATOR_TYPE, aggregator.getAggregatedValueType()); + } + + public void testGetInitialAggregatedValueForSegmentDocValue() { + assertEquals(1L, aggregator.getInitialAggregatedValueForSegmentDocValue(randomLong(), StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValueAndSegmentValue() { + assertEquals(3L, aggregator.mergeAggregatedValueAndSegmentValue(2L, 3L, StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValues() { + assertEquals(5L, aggregator.mergeAggregatedValues(2L, 3L), 0.0); + } + + public void testGetInitialAggregatedValue() { + assertEquals(3L, aggregator.getInitialAggregatedValue(3L), 0.0); + } + + public void testGetMaxAggregatedValueByteSize() { + assertEquals(Long.BYTES, aggregator.getMaxAggregatedValueByteSize()); + } + + public void testToLongValue() { + assertEquals(3L, aggregator.toLongValue(3L), 0.0); + } + + public void testToStarTreeNumericTypeValue() { + assertEquals(3L, aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.LONG), 0.0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregatorTests.java new file mode 100644 index 0000000000000..194d667cffbd7 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MaxValueAggregatorTests.java @@ -0,0 +1,58 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; + +public class MaxValueAggregatorTests extends OpenSearchTestCase { + private final MaxValueAggregator aggregator = new MaxValueAggregator(); + + public void testGetAggregationType() { + assertEquals(MetricStat.MAX.getTypeName(), aggregator.getAggregationType().getTypeName()); + } + + public void testGetAggregatedValueType() { + assertEquals(MaxValueAggregator.VALUE_AGGREGATOR_TYPE, aggregator.getAggregatedValueType()); + } + + public void testGetInitialAggregatedValueForSegmentDocValue() { + assertEquals(1.0, aggregator.getInitialAggregatedValueForSegmentDocValue(1L, StarTreeNumericType.LONG), 0.0); + assertThrows( + NullPointerException.class, + () -> aggregator.getInitialAggregatedValueForSegmentDocValue(null, StarTreeNumericType.DOUBLE) + ); + } + + public void testMergeAggregatedValueAndSegmentValue() { + assertEquals(3.0, aggregator.mergeAggregatedValueAndSegmentValue(2.0, 3L, StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValues() { + assertEquals(3.0, aggregator.mergeAggregatedValues(2.0, 3.0), 0.0); + } + + public void testGetInitialAggregatedValue() { + assertEquals(3.0, aggregator.getInitialAggregatedValue(3.0), 0.0); + } + + public void testGetMaxAggregatedValueByteSize() { + assertEquals(Double.BYTES, aggregator.getMaxAggregatedValueByteSize()); + } + + public void testToLongValue() { + assertEquals(NumericUtils.doubleToSortableLong(3.0), aggregator.toLongValue(3.0), 0.0); + } + + public void testToStarTreeNumericTypeValue() { + assertEquals(NumericUtils.sortableLongToDouble(3L), aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.DOUBLE), 0.0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java new file mode 100644 index 0000000000000..73e6aeb44cfd7 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MetricAggregatorInfoTests.java @@ -0,0 +1,113 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.test.OpenSearchTestCase; + +public class MetricAggregatorInfoTests extends OpenSearchTestCase { + + public void testConstructor() { + MetricAggregatorInfo pair = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + assertEquals(MetricStat.SUM, pair.getMetricStat()); + assertEquals("column1", pair.getField()); + } + + public void testCountStarConstructor() { + MetricAggregatorInfo pair = new MetricAggregatorInfo( + MetricStat.COUNT, + "anything", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + assertEquals(MetricStat.COUNT, pair.getMetricStat()); + assertEquals("anything", pair.getField()); + } + + public void testToFieldName() { + MetricAggregatorInfo pair = new MetricAggregatorInfo( + MetricStat.SUM, + "column2", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + assertEquals("star_tree_field_column2_sum", pair.toFieldName()); + } + + public void testEquals() { + MetricAggregatorInfo pair1 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + MetricAggregatorInfo pair2 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + assertEquals(pair1, pair2); + assertNotEquals( + pair1, + new MetricAggregatorInfo(MetricStat.COUNT, "column1", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE) + ); + assertNotEquals( + pair1, + new MetricAggregatorInfo(MetricStat.SUM, "column2", "star_tree_field", IndexNumericFieldData.NumericType.DOUBLE) + ); + } + + public void testHashCode() { + MetricAggregatorInfo pair1 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + MetricAggregatorInfo pair2 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + assertEquals(pair1.hashCode(), pair2.hashCode()); + } + + public void testCompareTo() { + MetricAggregatorInfo pair1 = new MetricAggregatorInfo( + MetricStat.SUM, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + MetricAggregatorInfo pair2 = new MetricAggregatorInfo( + MetricStat.SUM, + "column2", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + MetricAggregatorInfo pair3 = new MetricAggregatorInfo( + MetricStat.COUNT, + "column1", + "star_tree_field", + IndexNumericFieldData.NumericType.DOUBLE + ); + assertTrue(pair1.compareTo(pair2) < 0); + assertTrue(pair2.compareTo(pair1) > 0); + assertTrue(pair1.compareTo(pair3) > 0); + assertTrue(pair3.compareTo(pair1) < 0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregatorTests.java new file mode 100644 index 0000000000000..8f8ce259bc62c --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/MinValueAggregatorTests.java @@ -0,0 +1,58 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed 
under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; + +public class MinValueAggregatorTests extends OpenSearchTestCase { + private final MinValueAggregator aggregator = new MinValueAggregator(); + + public void testGetAggregationType() { + assertEquals(MetricStat.MIN.getTypeName(), aggregator.getAggregationType().getTypeName()); + } + + public void testGetAggregatedValueType() { + assertEquals(MinValueAggregator.VALUE_AGGREGATOR_TYPE, aggregator.getAggregatedValueType()); + } + + public void testGetInitialAggregatedValueForSegmentDocValue() { + assertEquals(1.0, aggregator.getInitialAggregatedValueForSegmentDocValue(1L, StarTreeNumericType.LONG), 0.0); + assertThrows( + NullPointerException.class, + () -> aggregator.getInitialAggregatedValueForSegmentDocValue(null, StarTreeNumericType.DOUBLE) + ); + } + + public void testMergeAggregatedValueAndSegmentValue() { + assertEquals(2.0, aggregator.mergeAggregatedValueAndSegmentValue(2.0, 3L, StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValues() { + assertEquals(2.0, aggregator.mergeAggregatedValues(2.0, 3.0), 0.0); + } + + public void testGetInitialAggregatedValue() { + assertEquals(3.0, aggregator.getInitialAggregatedValue(3.0), 0.0); + } + + public void testGetMaxAggregatedValueByteSize() { + assertEquals(Double.BYTES, aggregator.getMaxAggregatedValueByteSize()); + } + + public void testToLongValue() { + assertEquals(NumericUtils.doubleToSortableLong(3.0), aggregator.toLongValue(3.0), 0.0); + } + + public void testToStarTreeNumericTypeValue() { + assertEquals(NumericUtils.sortableLongToDouble(3L), aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.DOUBLE), 0.0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java new file mode 100644 index 0000000000000..3fb627e7cd434 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/SumValueAggregatorTests.java @@ -0,0 +1,72 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.apache.lucene.util.NumericUtils; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.Before; + +public class SumValueAggregatorTests extends OpenSearchTestCase { + + private SumValueAggregator aggregator; + + @Before + public void setup() { + aggregator = new SumValueAggregator(); + } + + public void testGetAggregationType() { + assertEquals(MetricStat.SUM.getTypeName(), aggregator.getAggregationType().getTypeName()); + } + + public void testGetAggregatedValueType() { + assertEquals(SumValueAggregator.VALUE_AGGREGATOR_TYPE, aggregator.getAggregatedValueType()); + } + + public void testGetInitialAggregatedValueForSegmentDocValue() { + assertEquals(1.0, aggregator.getInitialAggregatedValueForSegmentDocValue(1L, StarTreeNumericType.LONG), 0.0); + assertThrows( + NullPointerException.class, + () -> aggregator.getInitialAggregatedValueForSegmentDocValue(null, StarTreeNumericType.DOUBLE) + ); + } + + public void testMergeAggregatedValueAndSegmentValue() { + aggregator.getInitialAggregatedValue(2.0); + assertEquals(5.0, aggregator.mergeAggregatedValueAndSegmentValue(2.0, 3L, StarTreeNumericType.LONG), 0.0); + } + + public void testMergeAggregatedValueAndSegmentValue_nullSegmentDocValue() { + aggregator.getInitialAggregatedValue(2.0); + assertThrows(NullPointerException.class, () -> aggregator.mergeAggregatedValueAndSegmentValue(2.0, null, StarTreeNumericType.LONG)); + } + + public void testMergeAggregatedValues() { + aggregator.getInitialAggregatedValue(3.0); + assertEquals(5.0, aggregator.mergeAggregatedValues(2.0, 3.0), 0.0); + } + + public void testGetInitialAggregatedValue() { + assertEquals(3.14, aggregator.getInitialAggregatedValue(3.14), 0.0); + } + + public void testGetMaxAggregatedValueByteSize() { + assertEquals(Double.BYTES, aggregator.getMaxAggregatedValueByteSize()); + } + + public void testToLongValue() { + assertEquals(NumericUtils.doubleToSortableLong(3.14), aggregator.toLongValue(3.14), 0.0); + } + + public void testToStarTreeNumericTypeValue() { + assertEquals(NumericUtils.sortableLongToDouble(3L), aggregator.toStarTreeNumericTypeValue(3L, StarTreeNumericType.DOUBLE), 0.0); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java new file mode 100644 index 0000000000000..ce61ab839cc61 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/aggregators/ValueAggregatorFactoryTests.java @@ -0,0 +1,27 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.aggregators; + +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.numerictype.StarTreeNumericType; +import org.opensearch.test.OpenSearchTestCase; + +public class ValueAggregatorFactoryTests extends OpenSearchTestCase { + + public void testGetValueAggregatorForSumType() { + ValueAggregator aggregator = ValueAggregatorFactory.getValueAggregator(MetricStat.SUM); + assertNotNull(aggregator); + assertEquals(SumValueAggregator.class, aggregator.getClass()); + } + + public void testGetAggregatedValueTypeForSumType() { + StarTreeNumericType starTreeNumericType = ValueAggregatorFactory.getAggregatedValueType(MetricStat.SUM); + assertEquals(SumValueAggregator.VALUE_AGGREGATOR_TYPE, starTreeNumericType); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java new file mode 100644 index 0000000000000..b3b528eb55803 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/BaseStarTreeBuilderTests.java @@ -0,0 +1,254 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.index.BaseStarTreeBuilder; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.IndexFileNames; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SegmentInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.store.Directory; +import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.util.InfoStream; +import org.apache.lucene.util.Version; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.codec.composite.Composite90DocValuesFormat; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.compositeindex.datacube.startree.aggregators.MetricAggregatorInfo; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.index.fielddata.IndexNumericFieldData; +import org.opensearch.index.mapper.ContentPath; +import org.opensearch.index.mapper.DocumentMapper; +import org.opensearch.index.mapper.Mapper; +import 
org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.MappingLookup; +import org.opensearch.index.mapper.NumberFieldMapper; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Collections; +import java.util.HashMap; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class BaseStarTreeBuilderTests extends OpenSearchTestCase { + + private static BaseStarTreeBuilder builder; + private static MapperService mapperService; + private static List dimensionsOrder; + private static List fields = List.of( + "field1", + "field2", + "field3", + "field4", + "field5", + "field6", + "field7", + "field8", + "field9", + "field10" + ); + private static List metrics; + private static Directory directory; + private static FieldInfo[] fieldsInfo; + private static SegmentWriteState writeState; + private static StarTreeField starTreeField; + + private static IndexOutput dataOut; + private static IndexOutput metaOut; + + @BeforeClass + public static void setup() throws IOException { + + dimensionsOrder = List.of( + new NumericDimension("field1"), + new NumericDimension("field3"), + new NumericDimension("field5"), + new NumericDimension("field8") + ); + metrics = List.of(new Metric("field2", List.of(MetricStat.SUM)), new Metric("field4", List.of(MetricStat.SUM))); + + starTreeField = new StarTreeField( + "test", + dimensionsOrder, + metrics, + new StarTreeFieldConfiguration(1, Set.of("field8"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) + ); + + DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); + directory = newFSDirectory(createTempDir()); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + 5, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + + fieldsInfo = new FieldInfo[fields.size()]; + Map fieldProducerMap = new HashMap<>(); + for (int i = 0; i < fieldsInfo.length; i++) { + fieldsInfo[i] = new FieldInfo( + fields.get(i), + i, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldProducerMap.put(fields.get(i), docValuesProducer); + } + FieldInfos fieldInfos = new FieldInfos(fieldsInfo); + writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); + + String dataFileName = IndexFileNames.segmentFileName( + writeState.segmentInfo.name, + writeState.segmentSuffix, + Composite90DocValuesFormat.DATA_EXTENSION + ); + dataOut = writeState.directory.createOutput(dataFileName, writeState.context); + + String metaFileName = IndexFileNames.segmentFileName( + writeState.segmentInfo.name, + writeState.segmentSuffix, + Composite90DocValuesFormat.META_EXTENSION + ); + metaOut = writeState.directory.createOutput(metaFileName, writeState.context); + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + 
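// Editor's note, illustrative only: the mocked MapperService set up here is what
// generateMetricAggregatorInfos(mapperService) consults; each metric field's NumberFieldMapper
// (NumberType.DOUBLE in this setup) determines the IndexNumericFieldData.NumericType that
// test_generateMetricAggregatorInfos asserts on below.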
when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + + builder = new BaseStarTreeBuilder(metaOut, dataOut, starTreeField, writeState, mapperService) { + @Override + public void build( + List<StarTreeValues> starTreeValuesSubs, + AtomicInteger fieldNumberAcrossStarTrees, + DocValuesConsumer starTreeDocValuesConsumer + ) throws IOException {} + + @Override + public void appendStarTreeDocument(StarTreeDocument starTreeDocument) throws IOException {} + + @Override + public StarTreeDocument getStarTreeDocument(int docId) throws IOException { + return null; + } + + @Override + public List<StarTreeDocument> getStarTreeDocuments() { + return List.of(); + } + + @Override + public Long getDimensionValue(int docId, int dimensionId) throws IOException { + return 0L; + } + + @Override + public Iterator<StarTreeDocument> sortAndAggregateSegmentDocuments( + int numDocs, + SequentialDocValuesIterator[] dimensionReaders, + List<SequentialDocValuesIterator> metricReaders + ) throws IOException { + return null; + } + + @Override + public Iterator<StarTreeDocument> generateStarTreeDocumentsForStarNode(int startDocId, int endDocId, int dimensionId) + throws IOException { + return null; + } + }; + } + + public void test_generateMetricAggregatorInfos() { + List<MetricAggregatorInfo> metricAggregatorInfos = builder.generateMetricAggregatorInfos(mapperService); + List<MetricAggregatorInfo> expectedMetricAggregatorInfos = List.of( + new MetricAggregatorInfo(MetricStat.SUM, "field2", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE), + new MetricAggregatorInfo(MetricStat.SUM, "field4", starTreeField.getName(), IndexNumericFieldData.NumericType.DOUBLE) + ); + assertEquals(metricAggregatorInfos, expectedMetricAggregatorInfos); + } + + public void test_reduceStarTreeDocuments() { + StarTreeDocument starTreeDocument1 = new StarTreeDocument(new Long[] { 1L, 3L, 5L, 8L }, new Double[] { 4.0, 8.0 }); + StarTreeDocument starTreeDocument2 = new StarTreeDocument(new Long[] { 1L, 3L, 5L, 8L }, new Double[] { 10.0, 6.0 }); + + StarTreeDocument expectedMergedStarTreeDocument = new StarTreeDocument(new Long[] { 1L, 3L, 5L, 8L }, new Double[] { 14.0, 14.0 }); + StarTreeDocument mergedStarTreeDocument = builder.reduceStarTreeDocuments(null, starTreeDocument1); + StarTreeDocument resultStarTreeDocument = builder.reduceStarTreeDocuments(mergedStarTreeDocument, starTreeDocument2); + + assertEquals(resultStarTreeDocument.metrics[0], expectedMergedStarTreeDocument.metrics[0]); + assertEquals(resultStarTreeDocument.metrics[1], expectedMergedStarTreeDocument.metrics[1]); + } + + @Override + public void tearDown() throws Exception { + super.tearDown(); + dataOut.close(); + metaOut.close(); + directory.close(); + } +} diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java 
b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java new file mode 100644 index 0000000000000..1e655d7b3d455 --- /dev/null +++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/OnHeapStarTreeBuilderTests.java @@ -0,0 +1,806 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesConsumer; +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.codecs.lucene99.Lucene99Codec; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.FieldInfos; +import org.apache.lucene.index.IndexFileNames; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SegmentInfo; +import org.apache.lucene.index.SegmentWriteState; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.sandbox.document.HalfFloatPoint; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.store.Directory; +import org.apache.lucene.store.IndexOutput; +import org.apache.lucene.util.InfoStream; +import org.apache.lucene.util.NumericUtils; +import org.apache.lucene.util.Version; +import org.opensearch.common.settings.Settings; +import org.opensearch.index.codec.composite.Composite90DocValuesFormat; +import org.opensearch.index.codec.composite.datacube.startree.StarTreeValues; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeDocument; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.opensearch.index.mapper.ContentPath; +import org.opensearch.index.mapper.DocumentMapper; +import org.opensearch.index.mapper.Mapper; +import org.opensearch.index.mapper.MapperService; +import org.opensearch.index.mapper.MappingLookup; +import org.opensearch.index.mapper.NumberFieldMapper; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.Before; + +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Collections; +import java.util.HashMap; +import java.util.HashSet; +import java.util.Iterator; +import java.util.List; +import java.util.Map; +import java.util.Set; +import java.util.UUID; +import java.util.concurrent.atomic.AtomicInteger; + +import static org.mockito.Mockito.mock; +import static org.mockito.Mockito.when; + +public class OnHeapStarTreeBuilderTests extends OpenSearchTestCase { + + private OnHeapStarTreeBuilder builder; + private MapperService mapperService; + private List dimensionsOrder; + private List fields = List.of( + "field1", + "field2", + "field3", + "field4", + "field5", + "field6", + "field7", + "field8", + "field9", + "field10" + ); + private List metrics; + private Directory directory; + private FieldInfo[] fieldsInfo; + private StarTreeField 
compositeField; + private Map fieldProducerMap; + private SegmentWriteState writeState; + private IndexOutput dataOut; + private IndexOutput metaOut; + private DocValuesConsumer docValuesConsumer; + + @Before + public void setup() throws IOException { + dimensionsOrder = List.of( + new NumericDimension("field1"), + new NumericDimension("field3"), + new NumericDimension("field5"), + new NumericDimension("field8") + ); + metrics = List.of( + new Metric("field2", List.of(MetricStat.SUM)), + new Metric("field4", List.of(MetricStat.SUM)), + new Metric("field6", List.of(MetricStat.COUNT)), + new Metric("field9", List.of(MetricStat.MIN)), + new Metric("field10", List.of(MetricStat.MAX)) + ); + + DocValuesProducer docValuesProducer = mock(DocValuesProducer.class); + docValuesConsumer = mock(DocValuesConsumer.class); + + compositeField = new StarTreeField( + "test", + dimensionsOrder, + metrics, + new StarTreeFieldConfiguration(1, Set.of("field8"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP) + ); + directory = newFSDirectory(createTempDir()); + SegmentInfo segmentInfo = new SegmentInfo( + directory, + Version.LATEST, + Version.LUCENE_9_11_0, + "test_segment", + 5, + false, + false, + new Lucene99Codec(), + new HashMap<>(), + UUID.randomUUID().toString().substring(0, 16).getBytes(StandardCharsets.UTF_8), + new HashMap<>(), + null + ); + + fieldsInfo = new FieldInfo[fields.size()]; + fieldProducerMap = new HashMap<>(); + for (int i = 0; i < fieldsInfo.length; i++) { + fieldsInfo[i] = new FieldInfo( + fields.get(i), + i, + false, + false, + true, + IndexOptions.DOCS_AND_FREQS_AND_POSITIONS_AND_OFFSETS, + DocValuesType.SORTED_NUMERIC, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + fieldProducerMap.put(fields.get(i), docValuesProducer); + } + FieldInfos fieldInfos = new FieldInfos(fieldsInfo); + writeState = new SegmentWriteState(InfoStream.getDefault(), segmentInfo.dir, segmentInfo, fieldInfos, null, newIOContext(random())); + + String dataFileName = IndexFileNames.segmentFileName( + writeState.segmentInfo.name, + writeState.segmentSuffix, + Composite90DocValuesFormat.DATA_EXTENSION + ); + dataOut = writeState.directory.createOutput(dataFileName, writeState.context); + + String metaFileName = IndexFileNames.segmentFileName( + writeState.segmentInfo.name, + writeState.segmentSuffix, + Composite90DocValuesFormat.META_EXTENSION + ); + metaOut = writeState.directory.createOutput(metaFileName, writeState.context); + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper4 = new NumberFieldMapper.Builder("field9", NumberFieldMapper.NumberType.DOUBLE, false, true) + 
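// Editor's note, assumption: NumberFieldMapper.Builder(name, type, false, true) is read here as
// (name, type, ignoreMalformedByDefault, coerceByDefault); the two booleans are not documented in
// this patch, so treat that reading as a hypothesis rather than the mapper's contract.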
.build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper5 = new NumberFieldMapper.Builder("field10", NumberFieldMapper.NumberType.DOUBLE, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3, numberFieldMapper4, numberFieldMapper5), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(metaOut, dataOut, compositeField, writeState, mapperService); + } + + public void test_sortAndAggregateStarTreeDocuments() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble(), 8.0, 20.0 }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble(), 12.0, 10.0 }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble(), 6.0, 24.0 }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble(), 9.0, 12.0 }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble(), 8.0, 13.0 }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L, 8.0, 20.0 }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L, 6.0, 24.0 }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long[] metrics = new Long[starTreeDocuments[0].metrics.length]; + for (int j = 0; j < metrics.length; j++) { + metrics[j] = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[j]); + } + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + for (int dim = 0; dim < 4; dim++) { + assertEquals(expectedStarTreeDocument.dimensions[dim], resultStarTreeDocument.dimensions[dim]); + } + + for (int met = 0; met < 5; met++) { + assertEquals(expectedStarTreeDocument.metrics[met], resultStarTreeDocument.metrics[met]); + } + numOfAggregatedDocuments++; + } + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_sortAndAggregateStarTreeDocuments_nullMetric() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble(), 8.0, 20.0 }); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new 
Double[] { 10.0, 6.0, randomDouble(), 8.0, 20.0 }); + starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble(), 8.0, 20.0 }); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble(), 9.0, 12.0 }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, null, randomDouble(), 8.0, 20.0 }); + StarTreeDocument expectedStarTreeDocument = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new Object[] { 21.0, 14.0, 2L, 8.0, 20.0 } + ); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long[] metrics = new Long[starTreeDocuments[0].metrics.length]; + for (int j = 0; j < metrics.length; j++) { + metrics[j] = starTreeDocuments[i].metrics[j] != null + ? NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[j]) + : null; + } + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + for (int dim = 0; dim < 4; dim++) { + assertEquals(expectedStarTreeDocument.dimensions[dim], resultStarTreeDocument.dimensions[dim]); + } + + for (int met = 0; met < 5; met++) { + assertEquals(expectedStarTreeDocument.metrics[met], resultStarTreeDocument.metrics[met]); + } + + assertThrows( + "Null metric should have resulted in IllegalStateException", + IllegalStateException.class, + segmentStarTreeDocumentIterator::next + ); + + } + + public void test_sortAndAggregateStarTreeDocument_longMaxAndLongMinDimensions() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument( + new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, + new Double[] { 12.0, 10.0, randomDouble(), 8.0, 20.0 } + ); + starTreeDocuments[1] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, + new Double[] { 10.0, 6.0, randomDouble(), 12.0, 10.0 } + ); + starTreeDocuments[2] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, + new Double[] { 14.0, 12.0, randomDouble(), 6.0, 24.0 } + ); + starTreeDocuments[3] = new StarTreeDocument( + new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, + new Double[] { 9.0, 4.0, randomDouble(), 9.0, 12.0 } + ); + starTreeDocuments[4] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, + new Double[] { 11.0, 16.0, randomDouble(), 8.0, 13.0 } + ); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { Long.MIN_VALUE, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L, 8.0, 20.0 }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, Long.MAX_VALUE }, new Object[] { 35.0, 34.0, 3L, 6.0, 24.0 }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long[] metrics = new Long[starTreeDocuments[0].metrics.length]; + for (int j = 0; j < metrics.length; j++) { + metrics[j] = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[j]); + } + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics); + } 
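// Editor's note, illustrative only: doubles are carried through the star tree as order-preserving
// sortable longs, and the encoding round-trips losslessly:
//   long bits = NumericUtils.doubleToSortableLong(14.0);
//   double back = NumericUtils.sortableLongToDouble(bits); // back == 14.0
// which is why the expected aggregated metrics above can be written as plain doubles.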
+ + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + for (int dim = 0; dim < 4; dim++) { + assertEquals(expectedStarTreeDocument.dimensions[dim], resultStarTreeDocument.dimensions[dim]); + } + + for (int met = 0; met < 5; met++) { + assertEquals(expectedStarTreeDocument.metrics[met], resultStarTreeDocument.metrics[met]); + } + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_build_DoubleMaxAndDoubleMinMetrics() throws IOException { + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new Double[] { Double.MAX_VALUE, 10.0, randomDouble(), 8.0, 20.0 } + ); + starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble(), 12.0, 10.0 }); + starTreeDocuments[2] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new Double[] { 14.0, Double.MIN_VALUE, randomDouble(), 6.0, 24.0 } + ); + starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble(), 9.0, 12.0 }); + starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble(), 8.0, 13.0 }); + + List inorderStarTreeDocuments = List.of( + new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { Double.MAX_VALUE + 9, 14.0, 2L, 8.0, 20.0 }), + new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, Double.MIN_VALUE + 22, 3L, 6.0, 24.0 }) + ); + Iterator expectedStarTreeDocumentIterator = inorderStarTreeDocuments.iterator(); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long[] metrics = new Long[starTreeDocuments[0].metrics.length]; + for (int j = 0; j < metrics.length; j++) { + metrics[j] = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[j]); + } + segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics); + } + + Iterator segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments); + int numOfAggregatedDocuments = 0; + while (segmentStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) { + StarTreeDocument resultStarTreeDocument = segmentStarTreeDocumentIterator.next(); + StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next(); + + for (int dim = 0; dim < 4; dim++) { + assertEquals(expectedStarTreeDocument.dimensions[dim], resultStarTreeDocument.dimensions[dim]); + } + + for (int met = 0; met < 5; met++) { + assertEquals(expectedStarTreeDocument.metrics[met], resultStarTreeDocument.metrics[met]); + } + + numOfAggregatedDocuments++; + } + + assertEquals(inorderStarTreeDocuments.size(), numOfAggregatedDocuments); + + } + + public void test_build_halfFloatMetrics() throws IOException { + + mapperService = mock(MapperService.class); + DocumentMapper documentMapper = mock(DocumentMapper.class); + 
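// Editor's note, illustrative only: for HALF_FLOAT metrics the segment values are encoded with
// HalfFloatPoint.halfFloatToSortableShort(float) in the conversion loop below, the half-float
// analogue of the NumericUtils.doubleToSortableLong encoding used in the double-based tests.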
when(mapperService.documentMapper()).thenReturn(documentMapper); + Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build(); + NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper4 = new NumberFieldMapper.Builder("field9", NumberFieldMapper.NumberType.HALF_FLOAT, false, true) + .build(new Mapper.BuilderContext(settings, new ContentPath())); + NumberFieldMapper numberFieldMapper5 = new NumberFieldMapper.Builder( + "field10", + NumberFieldMapper.NumberType.HALF_FLOAT, + false, + true + ).build(new Mapper.BuilderContext(settings, new ContentPath())); + MappingLookup fieldMappers = new MappingLookup( + Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3, numberFieldMapper4, numberFieldMapper5), + Collections.emptyList(), + Collections.emptyList(), + 0, + null + ); + when(documentMapper.mappers()).thenReturn(fieldMappers); + builder = new OnHeapStarTreeBuilder(metaOut, dataOut, compositeField, writeState, mapperService); + + int noOfStarTreeDocuments = 5; + StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + + starTreeDocuments[0] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new HalfFloatPoint[] { + new HalfFloatPoint("hf1", 12), + new HalfFloatPoint("hf6", 10), + new HalfFloatPoint("field6", 10), + new HalfFloatPoint("field9", 8), + new HalfFloatPoint("field10", 20) } + ); + starTreeDocuments[1] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { + new HalfFloatPoint("hf2", 10), + new HalfFloatPoint("hf7", 6), + new HalfFloatPoint("field6", 10), + new HalfFloatPoint("field9", 12), + new HalfFloatPoint("field10", 10) } + ); + starTreeDocuments[2] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { + new HalfFloatPoint("hf3", 14), + new HalfFloatPoint("hf8", 12), + new HalfFloatPoint("field6", 10), + new HalfFloatPoint("field9", 6), + new HalfFloatPoint("field10", 24) } + ); + starTreeDocuments[3] = new StarTreeDocument( + new Long[] { 2L, 4L, 3L, 4L }, + new HalfFloatPoint[] { + new HalfFloatPoint("hf4", 9), + new HalfFloatPoint("hf9", 4), + new HalfFloatPoint("field6", 10), + new HalfFloatPoint("field9", 9), + new HalfFloatPoint("field10", 12) } + ); + starTreeDocuments[4] = new StarTreeDocument( + new Long[] { 3L, 4L, 2L, 1L }, + new HalfFloatPoint[] { + new HalfFloatPoint("hf5", 11), + new HalfFloatPoint("hf10", 16), + new HalfFloatPoint("field6", 10), + new HalfFloatPoint("field9", 8), + new HalfFloatPoint("field10", 13) } + ); + + StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments]; + for (int i = 0; i < noOfStarTreeDocuments; i++) { + Long[] metrics = new Long[starTreeDocuments[0].metrics.length]; + for (int j = 0; j < metrics.length; j++) { + metrics[j] = (long) HalfFloatPoint.halfFloatToSortableShort( + ((HalfFloatPoint) starTreeDocuments[i].metrics[j]).numericValue().floatValue() + ); + } + 
+            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics);
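+            // Note: unlike doubles and floats, half floats round-trip through a short,
+            // so the cast chain above goes HalfFloatPoint -> float -> sortable short -> long.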
+        }
+
+        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
+        builder.build(segmentStarTreeDocumentIterator, new AtomicInteger(), docValuesConsumer);
+        List<StarTreeDocument> resultStarTreeDocuments = builder.getStarTreeDocuments();
+        assertEquals(8, resultStarTreeDocuments.size());
+
+        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator();
+        assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator);
+    }
+
+    public void test_build_floatMetrics() throws IOException {
+
+        mapperService = mock(MapperService.class);
+        DocumentMapper documentMapper = mock(DocumentMapper.class);
+        when(mapperService.documentMapper()).thenReturn(documentMapper);
+        Settings settings = Settings.builder().put(settings(org.opensearch.Version.CURRENT).build()).build();
+        NumberFieldMapper numberFieldMapper1 = new NumberFieldMapper.Builder("field2", NumberFieldMapper.NumberType.FLOAT, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        NumberFieldMapper numberFieldMapper2 = new NumberFieldMapper.Builder("field4", NumberFieldMapper.NumberType.FLOAT, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        NumberFieldMapper numberFieldMapper3 = new NumberFieldMapper.Builder("field6", NumberFieldMapper.NumberType.FLOAT, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        NumberFieldMapper numberFieldMapper4 = new NumberFieldMapper.Builder("field9", NumberFieldMapper.NumberType.FLOAT, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        NumberFieldMapper numberFieldMapper5 = new NumberFieldMapper.Builder("field10", NumberFieldMapper.NumberType.FLOAT, false, true)
+            .build(new Mapper.BuilderContext(settings, new ContentPath()));
+        MappingLookup fieldMappers = new MappingLookup(
+            Set.of(numberFieldMapper1, numberFieldMapper2, numberFieldMapper3, numberFieldMapper4, numberFieldMapper5),
+            Collections.emptyList(),
+            Collections.emptyList(),
+            0,
+            null
+        );
+        when(documentMapper.mappers()).thenReturn(fieldMappers);
+        builder = new OnHeapStarTreeBuilder(metaOut, dataOut, compositeField, writeState, mapperService);
+
+        int noOfStarTreeDocuments = 5;
+        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
+
+        starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 12.0F, 10.0F, randomFloat(), 8.0F, 20.0F });
+        starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 10.0F, 6.0F, randomFloat(), 12.0F, 10.0F });
+        starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 14.0F, 12.0F, randomFloat(), 6.0F, 24.0F });
+        starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Float[] { 9.0F, 4.0F, randomFloat(), 9.0F, 12.0F });
+        starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Float[] { 11.0F, 16.0F, randomFloat(), 8.0F, 13.0F });
+
+        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
+        for (int i = 0; i < noOfStarTreeDocuments; i++) {
+            Long[] metrics = new Long[starTreeDocuments[0].metrics.length];
+            for (int j = 0; j < metrics.length; j++) {
+                metrics[j] = (long) NumericUtils.floatToSortableInt((Float) starTreeDocuments[i].metrics[j]);
+            }
+            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics);
+        }
+
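+        // Five segment documents with two distinct dimension combinations collapse into
+        // two aggregated documents; build() then expands them to the 8 expected star tree documents.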
+        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
+        builder.build(segmentStarTreeDocumentIterator, new AtomicInteger(), docValuesConsumer);
+
+        List<StarTreeDocument> resultStarTreeDocuments = builder.getStarTreeDocuments();
+        assertEquals(8, resultStarTreeDocuments.size());
+
+        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator();
+        assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator);
+    }
+
+    private static Iterator<StarTreeDocument> getExpectedStarTreeDocumentIterator() {
+        List<StarTreeDocument> expectedStarTreeDocuments = List.of(
+            new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }),
+            new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }),
+            new StarTreeDocument(new Long[] { null, 4L, 2L, 1L }, new Object[] { 35.0, 34.0, 3L }),
+            new StarTreeDocument(new Long[] { null, 4L, 3L, 4L }, new Object[] { 21.0, 14.0, 2L }),
+            new StarTreeDocument(new Long[] { null, 4L, null, 1L }, new Object[] { 35.0, 34.0, 3L }),
+            new StarTreeDocument(new Long[] { null, 4L, null, 4L }, new Object[] { 21.0, 14.0, 2L }),
+            new StarTreeDocument(new Long[] { null, 4L, null, null }, new Object[] { 56.0, 48.0, 5L }),
+            new StarTreeDocument(new Long[] { null, null, null, null }, new Object[] { 56.0, 48.0, 5L })
+        );
+        return expectedStarTreeDocuments.iterator();
+    }
+
+    public void test_build() throws IOException {
+
+        int noOfStarTreeDocuments = 5;
+        StarTreeDocument[] starTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
+
+        starTreeDocuments[0] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 12.0, 10.0, randomDouble(), 8.0, 20.0 });
+        starTreeDocuments[1] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 10.0, 6.0, randomDouble(), 12.0, 10.0 });
+        starTreeDocuments[2] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 14.0, 12.0, randomDouble(), 6.0, 24.0 });
+        starTreeDocuments[3] = new StarTreeDocument(new Long[] { 2L, 4L, 3L, 4L }, new Double[] { 9.0, 4.0, randomDouble(), 9.0, 12.0 });
+        starTreeDocuments[4] = new StarTreeDocument(new Long[] { 3L, 4L, 2L, 1L }, new Double[] { 11.0, 16.0, randomDouble(), 8.0, 13.0 });
+
+        StarTreeDocument[] segmentStarTreeDocuments = new StarTreeDocument[noOfStarTreeDocuments];
+        for (int i = 0; i < noOfStarTreeDocuments; i++) {
+            Long[] metrics = new Long[starTreeDocuments[0].metrics.length];
+            for (int j = 0; j < metrics.length; j++) {
+                metrics[j] = NumericUtils.doubleToSortableLong((Double) starTreeDocuments[i].metrics[j]);
+            }
+            segmentStarTreeDocuments[i] = new StarTreeDocument(starTreeDocuments[i].dimensions, metrics);
+        }
+
+        Iterator<StarTreeDocument> segmentStarTreeDocumentIterator = builder.sortAndAggregateStarTreeDocuments(segmentStarTreeDocuments);
+        builder.build(segmentStarTreeDocumentIterator, new AtomicInteger(), docValuesConsumer);
+
+        List<StarTreeDocument> resultStarTreeDocuments = builder.getStarTreeDocuments();
+        assertEquals(8, resultStarTreeDocuments.size());
+
+        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator = getExpectedStarTreeDocumentIterator();
+        assertStarTreeDocuments(resultStarTreeDocuments, expectedStarTreeDocumentIterator);
+    }
+
+    private static void assertStarTreeDocuments(
+        List<StarTreeDocument> resultStarTreeDocuments,
+        Iterator<StarTreeDocument> expectedStarTreeDocumentIterator
+    ) {
+        Iterator<StarTreeDocument> resultStarTreeDocumentIterator = resultStarTreeDocuments.iterator();
+        while (resultStarTreeDocumentIterator.hasNext() && expectedStarTreeDocumentIterator.hasNext()) {
+            StarTreeDocument resultStarTreeDocument = resultStarTreeDocumentIterator.next();
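+            // Compare dimensions first, then the metrics; in the expected documents the
+            // first two metrics are sums and the third is the aggregated document count.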
+            StarTreeDocument expectedStarTreeDocument = expectedStarTreeDocumentIterator.next();
+
+            assertEquals(expectedStarTreeDocument.dimensions[0], resultStarTreeDocument.dimensions[0]);
+            assertEquals(expectedStarTreeDocument.dimensions[1], resultStarTreeDocument.dimensions[1]);
+            assertEquals(expectedStarTreeDocument.dimensions[2], resultStarTreeDocument.dimensions[2]);
+            assertEquals(expectedStarTreeDocument.dimensions[3], resultStarTreeDocument.dimensions[3]);
+            assertEquals(expectedStarTreeDocument.metrics[0], resultStarTreeDocument.metrics[0]);
+            assertEquals(expectedStarTreeDocument.metrics[1], resultStarTreeDocument.metrics[1]);
+            assertEquals(expectedStarTreeDocument.metrics[2], resultStarTreeDocument.metrics[2]);
+        }
+    }
+
+    public void testMergeFlow() throws IOException {
+        List<Long> dimList = List.of(0L, 1L, 3L, 4L, 5L);
+        List<Integer> docsWithField = List.of(0, 1, 3, 4, 5);
+        List<Long> dimList2 = List.of(0L, 1L, 2L, 3L, 4L, 5L);
+        List<Integer> docsWithField2 = List.of(0, 1, 2, 3, 4, 5);
+
+        List<Long> metricsList = List.of(
+            getLongFromDouble(0.0),
+            getLongFromDouble(10.0),
+            getLongFromDouble(20.0),
+            getLongFromDouble(30.0),
+            getLongFromDouble(40.0),
+            getLongFromDouble(50.0)
+        );
+        List<Integer> metricsWithField = List.of(0, 1, 2, 3, 4, 5);
+
+        Dimension d1 = new NumericDimension("field1");
+        Dimension d2 = new NumericDimension("field3");
+        Metric m1 = new Metric("field2", List.of(MetricStat.SUM));
+        List<Dimension> dims = List.of(d1, d2);
+        List<Metric> metrics = List.of(m1);
+        StarTreeFieldConfiguration c = new StarTreeFieldConfiguration(
+            1000,
+            new HashSet<>(),
+            StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP
+        );
+        StarTreeField sf = new StarTreeField("sf", dims, metrics, c);
+        SortedNumericDocValues d1sndv = getSortedNumericMock(dimList, docsWithField);
+        SortedNumericDocValues d2sndv = getSortedNumericMock(dimList2, docsWithField2);
+        SortedNumericDocValues m1sndv = getSortedNumericMock(metricsList, metricsWithField);
+        Map<String, DocIdSetIterator> dimDocIdSetIterators = Map.of("field1", d1sndv, "field3", d2sndv);
+        Map<String, DocIdSetIterator> metricDocIdSetIterators = Map.of("field2", m1sndv);
+        StarTreeValues starTreeValues = new StarTreeValues(sf, null, dimDocIdSetIterators, metricDocIdSetIterators);
+
+        SortedNumericDocValues f2d1sndv = getSortedNumericMock(dimList, docsWithField);
+        SortedNumericDocValues f2d2sndv = getSortedNumericMock(dimList2, docsWithField2);
+        SortedNumericDocValues f2m1sndv = getSortedNumericMock(metricsList, metricsWithField);
+        Map<String, DocIdSetIterator> f2dimDocIdSetIterators = Map.of("field1", f2d1sndv, "field3", f2d2sndv);
+        Map<String, DocIdSetIterator> f2metricDocIdSetIterators = Map.of("field2", f2m1sndv);
+        StarTreeValues starTreeValues2 = new StarTreeValues(sf, null, f2dimDocIdSetIterators, f2metricDocIdSetIterators);
+        OnHeapStarTreeBuilder builder = new OnHeapStarTreeBuilder(metaOut, dataOut, sf, writeState, mapperService);
+        Iterator<StarTreeDocument> starTreeDocumentIterator = builder.mergeStarTrees(List.of(starTreeValues, starTreeValues2));
+
+        /**
+         * Asserting following dim / metrics [ dim1, dim2 / Sum [ metric] ]
+         * [0, 0] | [0.0]
+         * [1, 1] | [20.0]
+         * [3, 3] | [60.0]
+         * [4, 4] | [80.0]
+         * [5, 5] | [100.0]
+         * [null, 2] | [40.0]
+         */
+        while (starTreeDocumentIterator.hasNext()) {
+            StarTreeDocument starTreeDocument = starTreeDocumentIterator.next();
+            assertEquals(
+                starTreeDocument.dimensions[0] != null ? starTreeDocument.dimensions[0] * 2 * 10.0 : 40.0,
+                starTreeDocument.metrics[0]
+            );
+        }
+    }
+
+    private Long getLongFromDouble(Double num) {
+        if (num == null) {
+            return null;
+        }
+        return NumericUtils.doubleToSortableLong(num);
+    }
+
+    private SortedNumericDocValues getSortedNumericMock(List<Long> dimList, List<Integer> docsWithField) {
+        return new SortedNumericDocValues() {
+            int index = -1;
+
+            @Override
+            public long nextValue() throws IOException {
+                return dimList.get(index);
+            }
+
+            @Override
+            public int docValueCount() {
+                return 0;
+            }
+
+            @Override
+            public boolean advanceExact(int target) throws IOException {
+                return false;
+            }
+
+            @Override
+            public int docID() {
+                return index;
+            }
+
+            @Override
+            public int nextDoc() throws IOException {
+                if (index == docsWithField.size() - 1) {
+                    return NO_MORE_DOCS;
+                }
+                index++;
+                return docsWithField.get(index);
+            }
+
+            @Override
+            public int advance(int target) throws IOException {
+                return 0;
+            }
+
+            @Override
+            public long cost() {
+                return 0;
+            }
+        };
+    }
+
+    @Override
+    public void tearDown() throws Exception {
+        super.tearDown();
+        dataOut.close();
+        metaOut.close();
+        directory.close();
+    }
+}
diff --git a/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/SequentialIteratorTests.java b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/SequentialIteratorTests.java
new file mode 100644
index 0000000000000..8f9943a4e73ef
--- /dev/null
+++ b/server/src/test/java/org/opensearch/index/compositeindex/datacube/startree/builder/SequentialIteratorTests.java
@@ -0,0 +1,133 @@
+/*
+ * SPDX-License-Identifier: Apache-2.0
+ *
+ * The OpenSearch Contributors require contributions made to
+ * this file be licensed under the Apache-2.0 license or a
+ * compatible open source license.
+ */ + +package org.opensearch.index.compositeindex.datacube.startree.builder; + +import org.apache.lucene.codecs.DocValuesProducer; +import org.apache.lucene.index.BinaryDocValues; +import org.apache.lucene.index.DocValuesType; +import org.apache.lucene.index.FieldInfo; +import org.apache.lucene.index.IndexOptions; +import org.apache.lucene.index.SortedNumericDocValues; +import org.apache.lucene.index.VectorEncoding; +import org.apache.lucene.index.VectorSimilarityFunction; +import org.apache.lucene.search.DocIdSetIterator; +import org.apache.lucene.util.BytesRef; +import org.opensearch.index.compositeindex.datacube.startree.utils.SequentialDocValuesIterator; +import org.opensearch.test.OpenSearchTestCase; +import org.junit.BeforeClass; + +import java.io.IOException; +import java.util.Collections; + +import org.mockito.Mockito; + +import static org.mockito.Mockito.when; + +public class SequentialIteratorTests extends OpenSearchTestCase { + + private static FieldInfo mockFieldInfo; + + @BeforeClass + public static void setup() { + mockFieldInfo = new FieldInfo( + "field", + 1, + false, + false, + true, + IndexOptions.NONE, + DocValuesType.NONE, + -1, + Collections.emptyMap(), + 0, + 0, + 0, + 0, + VectorEncoding.FLOAT32, + VectorSimilarityFunction.EUCLIDEAN, + false, + false + ); + } + + public void testCreateIterator_SortedNumeric() throws IOException { + DocValuesProducer producer = Mockito.mock(DocValuesProducer.class); + SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class); + when(producer.getSortedNumeric(mockFieldInfo)).thenReturn(iterator); + SequentialDocValuesIterator result = new SequentialDocValuesIterator(producer.getSortedNumeric(mockFieldInfo)); + assertEquals(iterator.getClass(), result.getDocIdSetIterator().getClass()); + } + + public void testCreateIterator_UnsupportedType() throws IOException { + DocValuesProducer producer = Mockito.mock(DocValuesProducer.class); + BinaryDocValues iterator = Mockito.mock(BinaryDocValues.class); + when(producer.getBinary(mockFieldInfo)).thenReturn(iterator); + SequentialDocValuesIterator result = new SequentialDocValuesIterator(producer.getBinary(mockFieldInfo)); + assertEquals(iterator.getClass(), result.getDocIdSetIterator().getClass()); + when(iterator.nextDoc()).thenReturn(0); + when(iterator.binaryValue()).thenReturn(new BytesRef("123")); + + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { + result.nextDoc(0); + result.value(0); + }); + assertEquals("Unsupported Iterator requested for SequentialDocValuesIterator", exception.getMessage()); + } + + public void testGetNextValue_SortedNumeric() throws IOException { + SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class); + when(iterator.nextDoc()).thenReturn(0); + when(iterator.nextValue()).thenReturn(123L); + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator); + sequentialDocValuesIterator.nextDoc(0); + long result = sequentialDocValuesIterator.value(0); + assertEquals(123L, result); + } + + public void testGetNextValue_UnsupportedIterator() { + DocIdSetIterator iterator = Mockito.mock(DocIdSetIterator.class); + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator); + + IllegalStateException exception = expectThrows(IllegalStateException.class, () -> { sequentialDocValuesIterator.value(0); }); + assertEquals("Unsupported Iterator requested for SequentialDocValuesIterator", exception.getMessage()); + } + + public 
void testNextDoc() throws IOException { + SortedNumericDocValues iterator = Mockito.mock(SortedNumericDocValues.class); + SequentialDocValuesIterator sequentialDocValuesIterator = new SequentialDocValuesIterator(iterator); + when(iterator.nextDoc()).thenReturn(5); + + int result = sequentialDocValuesIterator.nextDoc(5); + assertEquals(5, result); + } + + public void test_multipleCoordinatedDocumentReader() throws IOException { + SortedNumericDocValues iterator1 = Mockito.mock(SortedNumericDocValues.class); + SortedNumericDocValues iterator2 = Mockito.mock(SortedNumericDocValues.class); + + SequentialDocValuesIterator sequentialDocValuesIterator1 = new SequentialDocValuesIterator(iterator1); + SequentialDocValuesIterator sequentialDocValuesIterator2 = new SequentialDocValuesIterator(iterator2); + + when(iterator1.nextDoc()).thenReturn(0); + when(iterator2.nextDoc()).thenReturn(1); + + when(iterator1.nextValue()).thenReturn(9L); + when(iterator2.nextValue()).thenReturn(9L); + + sequentialDocValuesIterator1.nextDoc(0); + sequentialDocValuesIterator2.nextDoc(0); + assertEquals(0, sequentialDocValuesIterator1.getDocId()); + assertEquals(9L, (long) sequentialDocValuesIterator1.value(0)); + assertNotEquals(0, sequentialDocValuesIterator2.getDocId()); + assertEquals(1, sequentialDocValuesIterator2.getDocId()); + assertEquals(9L, (long) sequentialDocValuesIterator2.value(1)); + + } + +} diff --git a/server/src/test/java/org/opensearch/index/get/GetResultTests.java b/server/src/test/java/org/opensearch/index/get/GetResultTests.java index 64b14744a40d2..ef8c48f2753c7 100644 --- a/server/src/test/java/org/opensearch/index/get/GetResultTests.java +++ b/server/src/test/java/org/opensearch/index/get/GetResultTests.java @@ -35,12 +35,16 @@ import org.opensearch.common.collect.Tuple; import org.opensearch.common.document.DocumentField; import org.opensearch.common.io.stream.BytesStreamOutput; +import org.opensearch.common.xcontent.LoggingDeprecationHandler; import org.opensearch.common.xcontent.XContentType; +import org.opensearch.common.xcontent.json.JsonXContent; +import org.opensearch.core.common.ParsingException; import org.opensearch.core.common.Strings; import org.opensearch.core.common.bytes.BytesArray; import org.opensearch.core.common.bytes.BytesReference; import org.opensearch.core.xcontent.MediaType; import org.opensearch.core.xcontent.MediaTypeRegistry; +import org.opensearch.core.xcontent.NamedXContentRegistry; import org.opensearch.core.xcontent.ToXContent; import org.opensearch.core.xcontent.XContentParser; import org.opensearch.index.mapper.IdFieldMapper; @@ -220,6 +224,22 @@ public void testEqualsAndHashcode() { ); } + public void testFromXContentEmbeddedFoundParsingException() throws IOException { + String json = "{\"_index\":\"foo\",\"_id\":\"bar\"}"; + try ( + XContentParser parser = JsonXContent.jsonXContent.createParser( + NamedXContentRegistry.EMPTY, + LoggingDeprecationHandler.INSTANCE, + json + ) + ) { + ensureExpectedToken(XContentParser.Token.START_OBJECT, parser.nextToken(), parser); + ParsingException parsingException = assertThrows(ParsingException.class, () -> GetResult.fromXContentEmbedded(parser)); + assertEquals("Missing required field [found]", parsingException.getMessage()); + } + + } + public static GetResult copyGetResult(GetResult getResult) { return new GetResult( getResult.getIndex(), diff --git a/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java index 
b10a7d8155056..504bc622ec12e 100644
--- a/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java
+++ b/server/src/test/java/org/opensearch/index/mapper/ObjectMapperTests.java
@@ -33,6 +33,8 @@
 package org.opensearch.index.mapper;
 
 import org.opensearch.common.compress.CompressedXContent;
+import org.opensearch.common.settings.Settings;
+import org.opensearch.common.util.FeatureFlags;
 import org.opensearch.common.xcontent.XContentFactory;
 import org.opensearch.core.common.bytes.BytesArray;
 import org.opensearch.core.xcontent.MediaTypeRegistry;
@@ -46,6 +48,7 @@
 import java.io.IOException;
 import java.util.Collection;
 
+import static org.opensearch.common.util.FeatureFlags.STAR_TREE_INDEX;
 import static org.hamcrest.Matchers.containsString;
 
 public class ObjectMapperTests extends OpenSearchSingleNodeTestCase {
@@ -487,6 +490,76 @@ public void testDerivedFields() throws Exception {
         assertEquals("date", mapper.typeName());
     }
 
+    public void testCompositeFields() throws Exception {
+        String mapping = XContentFactory.jsonBuilder()
+            .startObject()
+            .startObject("tweet")
+            .startObject("composite")
+            .startObject("startree")
+            .field("type", "star_tree")
+            .startObject("config")
+            .startArray("ordered_dimensions")
+            .startObject()
+            .field("name", "@timestamp")
+            .endObject()
+            .startObject()
+            .field("name", "status")
+            .endObject()
+            .endArray()
+            .startArray("metrics")
+            .startObject()
+            .field("name", "status")
+            .endObject()
+            .startObject()
+            .field("name", "metric_field")
+            .endObject()
+            .endArray()
+            .endObject()
+            .endObject()
+            .endObject()
+            .startObject("properties")
+            .startObject("@timestamp")
+            .field("type", "date")
+            .endObject()
+            .startObject("status")
+            .field("type", "integer")
+            .endObject()
+            .startObject("metric_field")
+            .field("type", "integer")
+            .endObject()
+            .endObject()
+            .endObject()
+            .endObject()
+            .toString();
+
+        IllegalArgumentException ex = expectThrows(
+            IllegalArgumentException.class,
+            () -> createIndex("invalid").mapperService().documentMapperParser().parse("tweet", new CompressedXContent(mapping))
+        );
+        assertEquals(
+            "star tree index is under an experimental feature and can be activated only by enabling opensearch.experimental.feature.composite_index.star_tree.enabled feature flag in the JVM options",
+            ex.getMessage()
+        );
+
+        final Settings starTreeEnabledSettings = Settings.builder().put(STAR_TREE_INDEX, "true").build();
+        FeatureFlags.initializeFeatureFlags(starTreeEnabledSettings);
+
+        DocumentMapper documentMapper = createIndex("test").mapperService()
+            .documentMapperParser()
+            .parse("tweet", new CompressedXContent(mapping));
+
+        Mapper mapper = documentMapper.root().getMapper("startree");
+        assertTrue(mapper instanceof StarTreeMapper);
+        StarTreeMapper starTreeMapper = (StarTreeMapper) mapper;
+        assertEquals("star_tree", starTreeMapper.fieldType().typeName());
+        // Check that field in properties was parsed correctly as well
+        mapper = documentMapper.root().getMapper("@timestamp");
+        assertNotNull(mapper);
+        assertEquals("date", mapper.typeName());
+
+        FeatureFlags.initializeFeatureFlags(Settings.EMPTY);
+    }
+
     @Override
     protected Collection<Class<? extends Plugin>> getPlugins() {
         return pluginList(InternalSettingsPlugin.class);
diff --git a/server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java b/server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java
new file mode 100644
index 0000000000000..3144b1b007924
--- /dev/null
+++ b/server/src/test/java/org/opensearch/index/mapper/StarTreeMapperTests.java
@@ -0,0 +1,767 @@
+/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. + */ + +package org.opensearch.index.mapper; + +import org.opensearch.common.CheckedConsumer; +import org.opensearch.common.Rounding; +import org.opensearch.common.settings.ClusterSettings; +import org.opensearch.common.settings.Settings; +import org.opensearch.common.util.FeatureFlags; +import org.opensearch.core.xcontent.XContentBuilder; +import org.opensearch.index.compositeindex.CompositeIndexSettings; +import org.opensearch.index.compositeindex.CompositeIndexValidator; +import org.opensearch.index.compositeindex.datacube.DateDimension; +import org.opensearch.index.compositeindex.datacube.Dimension; +import org.opensearch.index.compositeindex.datacube.Metric; +import org.opensearch.index.compositeindex.datacube.MetricStat; +import org.opensearch.index.compositeindex.datacube.NumericDimension; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeField; +import org.opensearch.index.compositeindex.datacube.startree.StarTreeFieldConfiguration; +import org.junit.After; +import org.junit.Before; + +import java.io.IOException; +import java.util.ArrayList; +import java.util.Arrays; +import java.util.Collections; +import java.util.HashSet; +import java.util.List; +import java.util.Set; + +import static org.hamcrest.Matchers.containsString; + +/** + * Tests for {@link StarTreeMapper}. + */ +public class StarTreeMapperTests extends MapperTestCase { + + @Before + public void setup() { + FeatureFlags.initializeFeatureFlags(Settings.builder().put(FeatureFlags.STAR_TREE_INDEX, true).build()); + } + + @After + public void teardown() { + FeatureFlags.initializeFeatureFlags(Settings.EMPTY); + } + + public void testValidStarTree() throws IOException { + MapperService mapperService = createMapperService(getExpandedMapping("status", "size")); + Set compositeFieldTypes = mapperService.getCompositeFieldTypes(); + for (CompositeMappedFieldType type : compositeFieldTypes) { + StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) type; + assertEquals("@timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.DAY_OF_MONTH, + Rounding.DateTimeUnit.MONTH_OF_YEAR + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("status", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("size", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList(MetricStat.SUM, MetricStat.AVG); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(100, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals(StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, starTreeFieldType.getStarTreeConfig().getBuildMode()); + assertEquals( + new HashSet<>(Arrays.asList("@timestamp", "status")), + starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims() + ); + } + } + + public void testValidStarTreeDefaults() throws IOException { + MapperService mapperService = createMapperService(getMinMapping()); + Set compositeFieldTypes = mapperService.getCompositeFieldTypes(); + for (CompositeMappedFieldType type : 
compositeFieldTypes) { + StarTreeMapper.StarTreeFieldType starTreeFieldType = (StarTreeMapper.StarTreeFieldType) type; + assertEquals("@timestamp", starTreeFieldType.getDimensions().get(0).getField()); + assertTrue(starTreeFieldType.getDimensions().get(0) instanceof DateDimension); + DateDimension dateDim = (DateDimension) starTreeFieldType.getDimensions().get(0); + List expectedTimeUnits = Arrays.asList( + Rounding.DateTimeUnit.MINUTES_OF_HOUR, + Rounding.DateTimeUnit.HOUR_OF_DAY + ); + assertEquals(expectedTimeUnits, dateDim.getIntervals()); + assertEquals("status", starTreeFieldType.getDimensions().get(1).getField()); + assertEquals("status", starTreeFieldType.getMetrics().get(0).getField()); + List expectedMetrics = Arrays.asList( + MetricStat.AVG, + MetricStat.COUNT, + MetricStat.SUM, + MetricStat.MAX, + MetricStat.MIN + ); + assertEquals(expectedMetrics, starTreeFieldType.getMetrics().get(0).getMetrics()); + assertEquals(10000, starTreeFieldType.getStarTreeConfig().maxLeafDocs()); + assertEquals(StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP, starTreeFieldType.getStarTreeConfig().getBuildMode()); + assertEquals(Collections.emptySet(), starTreeFieldType.getStarTreeConfig().getSkipStarNodeCreationInDims()); + } + } + + public void testInvalidDim() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getExpandedMapping("invalid", "size")) + ); + assertEquals("Failed to parse mapping [_doc]: unknown dimension field [invalid]", ex.getMessage()); + } + + public void testInvalidMetric() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getExpandedMapping("status", "invalid")) + ); + assertEquals("Failed to parse mapping [_doc]: unknown metric field [invalid]", ex.getMessage()); + } + + public void testNoMetrics() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(false, true, false, false)) + ); + assertThat( + ex.getMessage(), + containsString("Failed to parse mapping [_doc]: metrics section is required for star tree field [startree]") + ); + } + + public void testInvalidParam() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(false, false, false, false, true)) + ); + assertEquals( + "Failed to parse mapping [_doc]: Star tree mapping definition has unsupported parameters: [invalid : {invalid=invalid}]", + ex.getMessage() + ); + } + + public void testNoDims() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(true, false, false, false)) + ); + assertThat( + ex.getMessage(), + containsString("Failed to parse mapping [_doc]: ordered_dimensions is required for star tree field [startree]") + ); + } + + public void testMissingDims() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(false, false, true, false)) + ); + assertThat(ex.getMessage(), containsString("Failed to parse mapping [_doc]: unknown dimension field [@timestamp]")); + } + + public void testMissingMetrics() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMapping(false, false, false, true)) + ); + assertThat(ex.getMessage(), containsString("Failed to parse mapping [_doc]: unknown metric field [metric_field]")); + } + + public void testInvalidMetricType() { + 
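// getInvalidMapping with invalidMetricType=true maps metric_field as "date",
+        // a non-numeric type, so parsing the star tree metric must fail.
+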
MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(false, false, false, true)) + ); + assertEquals( + "Failed to parse mapping [_doc]: non-numeric field type is associated with star tree metric [startree]", + ex.getMessage() + ); + } + + public void testInvalidDimType() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(false, false, true, false)) + ); + assertEquals( + "Failed to parse mapping [_doc]: unsupported field type associated with dimension [@timestamp] as part of star tree field [startree]", + ex.getMessage() + ); + } + + public void testInvalidSkipDim() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(false, true, false, false)) + ); + assertEquals( + "Failed to parse mapping [_doc]: [invalid] in skip_star_node_creation_for_dimensions should be part of ordered_dimensions", + ex.getMessage() + ); + } + + public void testInvalidSingleDim() { + MapperParsingException ex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getInvalidMapping(true, false, false, false)) + ); + assertEquals( + "Failed to parse mapping [_doc]: Atleast two dimensions are required to build star tree index field [startree]", + ex.getMessage() + ); + } + + public void testMetric() { + List m1 = new ArrayList<>(); + m1.add(MetricStat.MAX); + Metric metric1 = new Metric("name", m1); + Metric metric2 = new Metric("name", m1); + assertEquals(metric1, metric2); + List m2 = new ArrayList<>(); + m2.add(MetricStat.MAX); + m2.add(MetricStat.COUNT); + metric2 = new Metric("name", m2); + assertNotEquals(metric1, metric2); + + assertEquals(MetricStat.COUNT, MetricStat.fromTypeName("count")); + assertEquals(MetricStat.MAX, MetricStat.fromTypeName("max")); + assertEquals(MetricStat.MIN, MetricStat.fromTypeName("min")); + assertEquals(MetricStat.SUM, MetricStat.fromTypeName("sum")); + assertEquals(MetricStat.AVG, MetricStat.fromTypeName("avg")); + IllegalArgumentException ex = expectThrows(IllegalArgumentException.class, () -> MetricStat.fromTypeName("invalid")); + assertEquals("Invalid metric stat: invalid", ex.getMessage()); + } + + public void testDimensions() { + List d1CalendarIntervals = new ArrayList<>(); + d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); + DateDimension d1 = new DateDimension("name", d1CalendarIntervals); + DateDimension d2 = new DateDimension("name", d1CalendarIntervals); + assertEquals(d1, d2); + d2 = new DateDimension("name1", d1CalendarIntervals); + assertNotEquals(d1, d2); + List d2CalendarIntervals = new ArrayList<>(); + d2CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); + d2CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); + d2 = new DateDimension("name", d2CalendarIntervals); + assertNotEquals(d1, d2); + NumericDimension n1 = new NumericDimension("name"); + NumericDimension n2 = new NumericDimension("name"); + assertEquals(n1, n2); + n2 = new NumericDimension("name1"); + assertNotEquals(n1, n2); + } + + public void testStarTreeField() { + List m1 = new ArrayList<>(); + m1.add(MetricStat.MAX); + Metric metric1 = new Metric("name", m1); + List d1CalendarIntervals = new ArrayList<>(); + d1CalendarIntervals.add(Rounding.DateTimeUnit.HOUR_OF_DAY); + DateDimension d1 = new DateDimension("name", d1CalendarIntervals); + NumericDimension n1 = new NumericDimension("numeric"); + NumericDimension n2 = new NumericDimension("name1"); + + 
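// Two StarTreeFields are equal only when name, dimensions, metrics and
+        // configuration all match; each block below changes exactly one component.
+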
List metrics = List.of(metric1); + List dims = List.of(d1, n2); + StarTreeFieldConfiguration config = new StarTreeFieldConfiguration( + 100, + Set.of("name"), + StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP + ); + + StarTreeField field1 = new StarTreeField("starTree", dims, metrics, config); + StarTreeField field2 = new StarTreeField("starTree", dims, metrics, config); + assertEquals(field1, field2); + + dims = List.of(d1, n2, n1); + field2 = new StarTreeField("starTree", dims, metrics, config); + assertNotEquals(field1, field2); + + dims = List.of(d1, n2); + metrics = List.of(metric1, metric1); + field2 = new StarTreeField("starTree", dims, metrics, config); + assertNotEquals(field1, field2); + + dims = List.of(d1, n2); + metrics = List.of(metric1); + StarTreeFieldConfiguration config1 = new StarTreeFieldConfiguration( + 1000, + Set.of("name"), + StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP + ); + field2 = new StarTreeField("starTree", dims, metrics, config1); + assertNotEquals(field1, field2); + + config1 = new StarTreeFieldConfiguration(100, Set.of("name", "field2"), StarTreeFieldConfiguration.StarTreeBuildMode.OFF_HEAP); + field2 = new StarTreeField("starTree", dims, metrics, config1); + assertNotEquals(field1, field2); + + config1 = new StarTreeFieldConfiguration(100, Set.of("name"), StarTreeFieldConfiguration.StarTreeBuildMode.ON_HEAP); + field2 = new StarTreeField("starTree", dims, metrics, config1); + assertNotEquals(field1, field2); + + field2 = new StarTreeField("starTree", dims, metrics, config); + assertEquals(field1, field2); + } + + public void testValidations() throws IOException { + MapperService mapperService = createMapperService(getExpandedMapping("status", "size")); + Settings settings = Settings.builder().put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), true).build(); + CompositeIndexSettings enabledCompositeIndexSettings = new CompositeIndexSettings( + settings, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) + ); + CompositeIndexValidator.validate(mapperService, enabledCompositeIndexSettings, mapperService.getIndexSettings()); + settings = Settings.builder().put(CompositeIndexSettings.STAR_TREE_INDEX_ENABLED_SETTING.getKey(), false).build(); + CompositeIndexSettings compositeIndexSettings = new CompositeIndexSettings( + settings, + new ClusterSettings(Settings.EMPTY, ClusterSettings.BUILT_IN_CLUSTER_SETTINGS) + ); + MapperService finalMapperService = mapperService; + IllegalArgumentException ex = expectThrows( + IllegalArgumentException.class, + () -> CompositeIndexValidator.validate(finalMapperService, compositeIndexSettings, finalMapperService.getIndexSettings()) + ); + assertEquals( + "star tree index cannot be created, enable it using [indices.composite_index.star_tree.enabled] setting", + ex.getMessage() + ); + + MapperService mapperServiceInvalid = createMapperService(getInvalidMappingWithDv(false, false, false, true)); + ex = expectThrows( + IllegalArgumentException.class, + () -> CompositeIndexValidator.validate( + mapperServiceInvalid, + enabledCompositeIndexSettings, + mapperServiceInvalid.getIndexSettings() + ) + ); + assertEquals( + "Aggregations not supported for the metrics field [metric_field] with field type [integer] as part of star tree field", + ex.getMessage() + ); + + MapperService mapperServiceInvalidDim = createMapperService(getInvalidMappingWithDv(false, false, true, false)); + ex = expectThrows( + IllegalArgumentException.class, + () -> CompositeIndexValidator.validate( + 
mapperServiceInvalidDim, + enabledCompositeIndexSettings, + mapperServiceInvalidDim.getIndexSettings() + ) + ); + assertEquals( + "Aggregations not supported for the dimension field [@timestamp] with field type [date] as part of star tree field", + ex.getMessage() + ); + + MapperParsingException mapperParsingExceptionex = expectThrows( + MapperParsingException.class, + () -> createMapperService(getMinMappingWith2StarTrees()) + ); + assertEquals( + "Failed to parse mapping [_doc]: Composite fields cannot have more than [1] fields", + mapperParsingExceptionex.getMessage() + ); + } + + private XContentBuilder getExpandedMapping(String dim, String metric) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + b.field("max_leaf_docs", 100); + b.startArray("skip_star_node_creation_for_dimensions"); + { + b.value("@timestamp"); + b.value("status"); + } + b.endArray(); + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.startArray("calendar_intervals"); + b.value("day"); + b.value("month"); + b.endArray(); + b.endObject(); + b.startObject(); + b.field("name", dim); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", metric); + b.startArray("stats"); + b.value("sum"); + b.value("avg"); + b.endArray(); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + b.field("type", "date"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("size"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder getMinMapping() throws IOException { + return getMinMapping(false, false, false, false); + } + + private XContentBuilder getMinMapping(boolean isEmptyDims, boolean isEmptyMetrics, boolean missingDim, boolean missingMetric) + throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + if (!isEmptyDims) { + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + } + if (!isEmptyMetrics) { + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + } + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + if (!missingDim) { + b.startObject("@timestamp"); + b.field("type", "date"); + b.endObject(); + } + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + if (!missingMetric) { + b.startObject("metric_field"); + b.field("type", "integer"); + b.endObject(); + } + b.endObject(); + }); + } + + private XContentBuilder getMinMappingWith2StarTrees() throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", 
"metric_field"); + b.endObject(); + b.endArray(); + + b.endObject(); + b.endObject(); + + b.startObject("startree1"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + b.field("type", "date"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("metric_field"); + b.field("type", "integer"); + b.endObject(); + + b.endObject(); + }); + } + + private XContentBuilder getInvalidMapping( + boolean singleDim, + boolean invalidSkipDims, + boolean invalidDimType, + boolean invalidMetricType, + boolean invalidParam + ) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("skip_star_node_creation_for_dimensions"); + { + if (invalidSkipDims) { + b.value("invalid"); + } + b.value("status"); + } + b.endArray(); + if (invalidParam) { + b.startObject("invalid"); + b.field("invalid", "invalid"); + b.endObject(); + } + b.startArray("ordered_dimensions"); + if (!singleDim) { + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + } + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + if (!invalidDimType) { + b.field("type", "date"); + } else { + b.field("type", "keyword"); + } + b.endObject(); + + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.startObject("metric_field"); + if (invalidMetricType) { + b.field("type", "date"); + } else { + b.field("type", "integer"); + } + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder getInvalidMappingWithDv( + boolean singleDim, + boolean invalidSkipDims, + boolean invalidDimType, + boolean invalidMetricType + ) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + b.field("type", "star_tree"); + b.startObject("config"); + + b.startArray("skip_star_node_creation_for_dimensions"); + { + if (invalidSkipDims) { + b.value("invalid"); + } + b.value("status"); + } + b.endArray(); + b.startArray("ordered_dimensions"); + if (!singleDim) { + b.startObject(); + b.field("name", "@timestamp"); + b.endObject(); + } + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.startObject(); + b.field("name", "metric_field"); + b.endObject(); + b.endArray(); + b.endObject(); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("@timestamp"); + if (!invalidDimType) { + b.field("type", "date"); + b.field("doc_values", "true"); + } else { + b.field("type", "date"); + b.field("doc_values", "false"); + } + b.endObject(); + + b.startObject("status"); 
+ b.field("type", "integer"); + b.endObject(); + b.startObject("metric_field"); + if (invalidMetricType) { + b.field("type", "integer"); + b.field("doc_values", "false"); + } else { + b.field("type", "integer"); + b.field("doc_values", "true"); + } + b.endObject(); + b.endObject(); + }); + } + + private XContentBuilder getInvalidMapping(boolean singleDim, boolean invalidSkipDims, boolean invalidDimType, boolean invalidMetricType) + throws IOException { + return getInvalidMapping(singleDim, invalidSkipDims, invalidDimType, invalidMetricType, false); + } + + protected boolean supportsOrIgnoresBoost() { + return false; + } + + protected boolean supportsMeta() { + return false; + } + + @Override + protected void assertExistsQuery(MapperService mapperService) {} + + // Overriding fieldMapping to make it create composite mappings by default. + // This way, the parent tests are checking the right behavior for this Mapper. + @Override + protected final XContentBuilder fieldMapping(CheckedConsumer buildField) throws IOException { + return topMapping(b -> { + b.startObject("composite"); + b.startObject("startree"); + buildField.accept(b); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("size"); + b.field("type", "integer"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }); + } + + @Override + public void testEmptyName() { + MapperParsingException e = expectThrows(MapperParsingException.class, () -> createMapperService(topMapping(b -> { + b.startObject("composite"); + b.startObject(""); + minimalMapping(b); + b.endObject(); + b.endObject(); + b.startObject("properties"); + b.startObject("size"); + b.field("type", "integer"); + b.endObject(); + b.startObject("status"); + b.field("type", "integer"); + b.endObject(); + b.endObject(); + }))); + assertThat(e.getMessage(), containsString("name cannot be empty string")); + assertParseMinimalWarnings(); + } + + @Override + protected void minimalMapping(XContentBuilder b) throws IOException { + b.field("type", "star_tree"); + b.startObject("config"); + b.startArray("ordered_dimensions"); + b.startObject(); + b.field("name", "size"); + b.endObject(); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.startArray("metrics"); + b.startObject(); + b.field("name", "status"); + b.endObject(); + b.endArray(); + b.endObject(); + } + + @Override + protected void writeFieldValue(XContentBuilder builder) throws IOException {} + + @Override + protected void registerParameters(ParameterChecker checker) throws IOException { + + } +} diff --git a/server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java b/server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java new file mode 100644 index 0000000000000..54fc30cb5befa --- /dev/null +++ b/server/src/test/java/org/opensearch/ingest/AbstractBatchingProcessorTests.java @@ -0,0 +1,160 @@ +/* + * SPDX-License-Identifier: Apache-2.0 + * + * The OpenSearch Contributors require contributions made to + * this file be licensed under the Apache-2.0 license or a + * compatible open source license. 
+ */
+
+package org.opensearch.ingest;
+
+import org.opensearch.OpenSearchParseException;
+import org.opensearch.test.OpenSearchTestCase;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Consumer;
+
+public class AbstractBatchingProcessorTests extends OpenSearchTestCase {
+
+    public void testBatchExecute_emptyInput() {
+        DummyProcessor processor = new DummyProcessor(3);
+        Consumer<List<IngestDocumentWrapper>> handler = (results) -> assertTrue(results.isEmpty());
+        processor.batchExecute(Collections.emptyList(), handler);
+        assertTrue(processor.getSubBatches().isEmpty());
+    }
+
+    public void testBatchExecute_singleBatchSize() {
+        DummyProcessor processor = new DummyProcessor(3);
+        List<IngestDocumentWrapper> wrapperList = Arrays.asList(
+            IngestDocumentPreparer.createIngestDocumentWrapper(1),
+            IngestDocumentPreparer.createIngestDocumentWrapper(2),
+            IngestDocumentPreparer.createIngestDocumentWrapper(3)
+        );
+        List<IngestDocumentWrapper> resultList = new ArrayList<>();
+        processor.batchExecute(wrapperList, resultList::addAll);
+        assertEquals(wrapperList, resultList);
+        assertEquals(1, processor.getSubBatches().size());
+        assertEquals(wrapperList, processor.getSubBatches().get(0));
+    }
+
+    public void testBatchExecute_multipleBatches() {
+        DummyProcessor processor = new DummyProcessor(2);
+        List<IngestDocumentWrapper> wrapperList = Arrays.asList(
+            IngestDocumentPreparer.createIngestDocumentWrapper(1),
+            IngestDocumentPreparer.createIngestDocumentWrapper(2),
+            IngestDocumentPreparer.createIngestDocumentWrapper(3),
+            IngestDocumentPreparer.createIngestDocumentWrapper(4),
+            IngestDocumentPreparer.createIngestDocumentWrapper(5)
+        );
+        List<IngestDocumentWrapper> resultList = new ArrayList<>();
+        processor.batchExecute(wrapperList, resultList::addAll);
+        assertEquals(wrapperList, resultList);
+        assertEquals(3, processor.getSubBatches().size());
+        assertEquals(wrapperList.subList(0, 2), processor.getSubBatches().get(0));
+        assertEquals(wrapperList.subList(2, 4), processor.getSubBatches().get(1));
+        assertEquals(wrapperList.subList(4, 5), processor.getSubBatches().get(2));
+    }
+
+    public void testBatchExecute_randomBatches() {
+        int batchSize = randomIntBetween(2, 32);
+        int docCount = randomIntBetween(2, 32);
+        DummyProcessor processor = new DummyProcessor(batchSize);
+        List<IngestDocumentWrapper> wrapperList = new ArrayList<>();
+        for (int i = 0; i < docCount; ++i) {
+            wrapperList.add(IngestDocumentPreparer.createIngestDocumentWrapper(i));
+        }
+        List<IngestDocumentWrapper> resultList = new ArrayList<>();
+        processor.batchExecute(wrapperList, resultList::addAll);
+        assertEquals(wrapperList, resultList);
+        assertEquals(docCount / batchSize + (docCount % batchSize == 0 ? 0 : 1), processor.getSubBatches().size());
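+        // Expect ceil(docCount / batchSize) sub-batches: all full batches plus one
+        // remainder batch when docCount is not an exact multiple of batchSize.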
+    }
+
+    public void testBatchExecute_defaultBatchSize() {
+        DummyProcessor processor = new DummyProcessor(1);
+        List<IngestDocumentWrapper> wrapperList = Arrays.asList(
+            IngestDocumentPreparer.createIngestDocumentWrapper(1),
+            IngestDocumentPreparer.createIngestDocumentWrapper(2),
+            IngestDocumentPreparer.createIngestDocumentWrapper(3)
+        );
+        List<IngestDocumentWrapper> resultList = new ArrayList<>();
+        processor.batchExecute(wrapperList, resultList::addAll);
+        assertEquals(wrapperList, resultList);
+        assertEquals(3, processor.getSubBatches().size());
+        assertEquals(wrapperList.subList(0, 1), processor.getSubBatches().get(0));
+        assertEquals(wrapperList.subList(1, 2), processor.getSubBatches().get(1));
+        assertEquals(wrapperList.subList(2, 3), processor.getSubBatches().get(2));
+    }
+
+    public void testFactory_invalidBatchSize() {
+        Map<String, Object> config = new HashMap<>();
+        config.put("batch_size", 0);
+        DummyProcessor.DummyProcessorFactory factory = new DummyProcessor.DummyProcessorFactory("DummyProcessor");
+        OpenSearchParseException exception = assertThrows(OpenSearchParseException.class, () -> factory.create(config));
+        assertEquals("[batch_size] batch size must be a positive integer", exception.getMessage());
+    }
+
+    public void testFactory_defaultBatchSize() throws Exception {
+        Map<String, Object> config = new HashMap<>();
+        DummyProcessor.DummyProcessorFactory factory = new DummyProcessor.DummyProcessorFactory("DummyProcessor");
+        DummyProcessor processor = (DummyProcessor) factory.create(config);
+        assertEquals(1, processor.batchSize);
+    }
+
+    public void testFactory_callNewProcessor() throws Exception {
+        Map<String, Object> config = new HashMap<>();
+        config.put("batch_size", 3);
+        DummyProcessor.DummyProcessorFactory factory = new DummyProcessor.DummyProcessorFactory("DummyProcessor");
+        DummyProcessor processor = (DummyProcessor) factory.create(config);
+        assertEquals(3, processor.batchSize);
+    }
+
+    static class DummyProcessor extends AbstractBatchingProcessor {
+        private List<List<IngestDocumentWrapper>> subBatches = new ArrayList<>();
+
+        public List<List<IngestDocumentWrapper>> getSubBatches() {
+            return subBatches;
+        }
+
+        protected DummyProcessor(int batchSize) {
+            super("tag", "description", batchSize);
+        }
+
+        @Override
+        public void subBatchExecute(List<IngestDocumentWrapper> ingestDocumentWrappers, Consumer<List<IngestDocumentWrapper>> handler) {
+            subBatches.add(ingestDocumentWrappers);
+            handler.accept(ingestDocumentWrappers);
+        }
+
+        @Override
+        public IngestDocument execute(IngestDocument ingestDocument) throws Exception {
+            return ingestDocument;
+        }
+
+        @Override
+        public String getType() {
+            return null;
+        }
+
+        public static class DummyProcessorFactory extends Factory {
+
+            protected DummyProcessorFactory(String processorType) {
+                super(processorType);
+            }
+
+            public AbstractBatchingProcessor create(Map<String, Object> config) throws Exception {
+                final Map<String, Processor.Factory> processorFactories = new HashMap<>();
+                return super.create(processorFactories, "tag", "description", config);
+            }
+
+            @Override
+            protected AbstractBatchingProcessor newProcessor(String tag, String description, int batchSize, Map<String, Object> config) {
+                return new DummyProcessor(batchSize);
+            }
+        }
+    }
+}
diff --git a/server/src/test/java/org/opensearch/node/NodeTests.java b/server/src/test/java/org/opensearch/node/NodeTests.java
index f44cc352cd330..0093091f61a1c 100644
--- a/server/src/test/java/org/opensearch/node/NodeTests.java
+++ b/server/src/test/java/org/opensearch/node/NodeTests.java
@@ -380,7 +380,7 @@ public void testCreateWithFileCache() throws Exception {
         List<Class<? extends Plugin>> plugins = basePlugins();
         ByteSizeValue cacheSize = new ByteSizeValue(16, ByteSizeUnit.GB);
         Settings
diff --git a/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java b/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java
index dc5954907a4fa..01a4005255f29 100644
--- a/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java
+++ b/test/framework/src/main/java/org/opensearch/index/mapper/MapperTestCase.java
@@ -174,7 +174,7 @@ protected static void assertNoDocValuesField(ParseContext.Document doc, String f
         }
     }
 
-    public final void testEmptyName() {
+    public void testEmptyName() {
         MapperParsingException e = expectThrows(MapperParsingException.class, () -> createMapperService(mapping(b -> {
             b.startObject("");
             minimalMapping(b);
diff --git a/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java b/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java
index 28323a94af721..544fb100a17bf 100644
--- a/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java
+++ b/test/framework/src/main/java/org/opensearch/search/aggregations/AggregatorTestCase.java
@@ -115,6 +115,7 @@
 import org.opensearch.index.mapper.ObjectMapper.Nested;
 import org.opensearch.index.mapper.RangeFieldMapper;
 import org.opensearch.index.mapper.RangeType;
+import org.opensearch.index.mapper.StarTreeMapper;
 import org.opensearch.index.mapper.TextFieldMapper;
 import org.opensearch.index.query.QueryShardContext;
 import org.opensearch.index.shard.IndexShard;
@@ -201,6 +202,7 @@ public abstract class AggregatorTestCase extends OpenSearchTestCase {
         denylist.add(CompletionFieldMapper.CONTENT_TYPE); // TODO support completion
         denylist.add(FieldAliasMapper.CONTENT_TYPE); // TODO support alias
         denylist.add(DerivedFieldMapper.CONTENT_TYPE); // TODO support derived fields
+        denylist.add(StarTreeMapper.CONTENT_TYPE); // TODO evaluate support for star tree fields
 
         TYPE_TEST_DENYLIST = denylist;
     }
diff --git a/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java b/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java
index ca80c65e58522..ec88002317284 100644
--- a/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java
+++ b/test/framework/src/main/java/org/opensearch/test/InternalTestCluster.java
@@ -165,6 +165,7 @@
 import static org.opensearch.test.NodeRoles.onlyRoles;
 import static org.opensearch.test.NodeRoles.removeRoles;
 import static org.opensearch.test.OpenSearchTestCase.assertBusy;
+import static org.opensearch.test.OpenSearchTestCase.randomBoolean;
 import static org.opensearch.test.OpenSearchTestCase.randomFrom;
 import static org.hamcrest.Matchers.equalTo;
 import static org.hamcrest.Matchers.greaterThan;
@@ -216,7 +217,8 @@ public final class InternalTestCluster extends TestCluster {
         nodeAndClient.node.settings()
     );
 
-    private static final ByteSizeValue DEFAULT_SEARCH_CACHE_SIZE = new ByteSizeValue(2, ByteSizeUnit.GB);
+    private static final String DEFAULT_SEARCH_CACHE_SIZE_BYTES = "2gb";
+    private static final String DEFAULT_SEARCH_CACHE_SIZE_PERCENT = "5%";
 
     public static final int DEFAULT_LOW_NUM_CLUSTER_MANAGER_NODES = 1;
     public static final int DEFAULT_HIGH_NUM_CLUSTER_MANAGER_NODES = 3;
@@ -700,8 +702,10 @@ public synchronized void ensureAtLeastNumSearchAndDataNodes(int n) {
         logger.info("increasing cluster size from {} to {}", size, n);
         Set<DiscoveryNodeRole> searchAndDataRoles = Set.of(DiscoveryNodeRole.DATA_ROLE, DiscoveryNodeRole.SEARCH_ROLE);
         Settings settings = Settings.builder()
-            .put(Settings.EMPTY)
-            .put(Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(), DEFAULT_SEARCH_CACHE_SIZE)
+            .put(
+                Node.NODE_SEARCH_CACHE_SIZE_SETTING.getKey(),
+                randomBoolean() ? DEFAULT_SEARCH_CACHE_SIZE_PERCENT : DEFAULT_SEARCH_CACHE_SIZE_BYTES
+            )
             .build();
         startNodes(n - size, Settings.builder().put(onlyRoles(settings, searchAndDataRoles)).build());
         validateClusterFormed();
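The OpenSearchIntegTestCase hunk below adds an ensureGreen overload whose second argument controls whether the health check waits for relocating shards to settle. A hypothetical call site inside an integration test, assuming a test index named "my-index":

    // Wait up to 60s for green, but tolerate shards that are still relocating.
    ensureGreen(TimeValue.timeValueSeconds(60), false, "my-index");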
diff --git a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java
index 71ab56c98312a..ca5ddf21710af 100644
--- a/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java
+++ b/test/framework/src/main/java/org/opensearch/test/OpenSearchIntegTestCase.java
@@ -864,6 +864,10 @@ public ClusterHealthStatus ensureGreen(TimeValue timeout, String... indices) {
         return ensureColor(ClusterHealthStatus.GREEN, timeout, false, indices);
     }
 
+    public ClusterHealthStatus ensureGreen(TimeValue timeout, boolean waitForNoRelocatingShards, String... indices) {
+        return ensureColor(ClusterHealthStatus.GREEN, timeout, waitForNoRelocatingShards, false, indices);
+    }
+
     /**
      * Ensures the cluster has a yellow state via the cluster health API.
      */
@@ -891,6 +895,16 @@ private ClusterHealthStatus ensureColor(
         TimeValue timeout,
         boolean waitForNoInitializingShards,
         String... indices
+    ) {
+        return ensureColor(clusterHealthStatus, timeout, true, waitForNoInitializingShards, indices);
+    }
+
+    private ClusterHealthStatus ensureColor(
+        ClusterHealthStatus clusterHealthStatus,
+        TimeValue timeout,
+        boolean waitForNoRelocatingShards,
+        boolean waitForNoInitializingShards,
+        String... indices
     ) {
         String color = clusterHealthStatus.name().toLowerCase(Locale.ROOT);
         String method = "ensure" + Strings.capitalize(color);
@@ -899,7 +913,7 @@ private ClusterHealthStatus ensureColor(
             .timeout(timeout)
             .waitForStatus(clusterHealthStatus)
             .waitForEvents(Priority.LANGUID)
-            .waitForNoRelocatingShards(true)
+            .waitForNoRelocatingShards(waitForNoRelocatingShards)
             .waitForNoInitializingShards(waitForNoInitializingShards)
             // We currently often use ensureGreen or ensureYellow to check whether the cluster is back in a good state after shutting down
             // a node. If the node that is stopped is the cluster-manager node, another node will become cluster-manager and publish a