
Commit 2f756b0

committed
Merge branch 'master' into ccr
* master:
  Remove reference to non-existent store type (#32418)
  [TEST] Mute failing FlushIT test
  Fix ordering of bootstrap checks in docs (#32417)
  [TEST] Mute failing InternalEngineTests#testSeqNoAndCheckpoints
  [TEST] Mute failing testConvertLongHexError
  bump lucene version after backport
  Upgrade to Lucene-7.5.0-snapshot-608f0277b0 (#32390)
  [Kerberos] Avoid vagrant update on precommit (#32416)
  TESTS: Move netty leak detection to paranoid level (#32354)
  [DOCS] Fixes formatting of scope object in job resource
  Copy missing segment attributes in getSegmentInfo (#32396)
  AbstractQueryTestCase should run without type less often (#28936)
  INGEST: Fix Deprecation Warning in Script Proc. (#32407)
  Switch x-pack/plugin to new style Requests (#32327)
  Docs: Correcting a typo in tophits (#32359)
  Build: Stop double generating buildSrc pom (#32408)
  TEST: Avoid triggering merges in FlushIT
  Fix missing JavaDoc for @throws in several places in KerberosTicketValidator.
  Switch x-pack full restart to new style Requests (#32294)
  Release requests in cors handler (#32364)
  Painless: Clean Up PainlessClass Variables (#32380)
  Docs: Fix callouts in put license HL REST docs (#32363)
  [ML] Consistent pattern for strict/lenient parser names (#32399)
  Update update-settings.asciidoc (#31378)
  Remove some dead code (#31993)
  Introduce index store plugins (#32375)
  Rank-Eval: Reduce scope of an unchecked supression
  Make sure _forcemerge respects `max_num_segments`. (#32291)
  TESTS: Fix Buf Leaks in HttpReadWriteHandlerTests (#32377)
  Only enforce password hashing check if FIPS enabled (#32383)
2 parents 8474f8a + 588db62 commit 2f756b0

File tree

172 files changed: +1487 additions, -1391 deletions


buildSrc/build.gradle

Lines changed: 8 additions & 0 deletions

@@ -183,4 +183,12 @@ if (project != rootProject) {
     testClass = 'org.elasticsearch.gradle.test.GradleUnitTestCase'
     integTestClass = 'org.elasticsearch.gradle.test.GradleIntegrationTestCase'
   }
+
+  /*
+   * We alread configure publication and we don't need or want this one that
+   * comes from the java-gradle-plugin.
+   */
+  afterEvaluate {
+    generatePomFileForPluginMavenPublication.enabled = false
+  }
 }

buildSrc/version.properties

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 elasticsearch = 7.0.0-alpha1
-lucene = 7.5.0-snapshot-b9e064b935
+lucene = 7.5.0-snapshot-608f0277b0

 # optional dependencies
 spatial4j = 0.7

docs/java-rest/high-level/licensing/put-license.asciidoc

Lines changed: 3 additions & 4 deletions
@@ -33,10 +33,9 @@ include-tagged::{doc-tests}/LicensingDocumentationIT.java[put-license-response]
 --------------------------------------------------
 <1> The status of the license
 <2> Make sure that the license is valid.
-<3> Check the acknowledge flag.
-<4> It should be true if license is acknowledge.
-<5> Otherwise we can see the acknowledge messages in `acknowledgeHeader()` and check
-component-specific messages in `acknowledgeMessages()`.
+<3> Check the acknowledge flag. It should be true if license is acknowledged.
+<4> Otherwise we can see the acknowledge messages in `acknowledgeHeader()`
+<5> and check component-specific messages in `acknowledgeMessages()`.

 [[java-rest-high-put-license-async]]
 ==== Asynchronous Execution

docs/reference/aggregations/metrics/tophits-aggregation.asciidoc

Lines changed: 1 addition & 1 deletion
@@ -172,7 +172,7 @@ In the example below we search across crawled webpages. For each webpage we stor
 belong to. By defining a `terms` aggregator on the `domain` field we group the result set of webpages by domain. The
 `top_hits` aggregator is then defined as sub-aggregator, so that the top matching hits are collected per bucket.

-Also a `max` aggregator is defined which is used by the `terms` aggregator's order feature the return the buckets by
+Also a `max` aggregator is defined which is used by the `terms` aggregator's order feature to return the buckets by
 relevancy order of the most relevant document in a bucket.

 [source,js]
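The hunk above documents a `terms` aggregation ordered by a `max` sub-aggregator, with `top_hits` collecting the best hit per bucket. A minimal Python sketch of that grouping-and-ordering shape, using made-up hit data (an illustration only, not Elasticsearch code):

```python
# Hypothetical crawled-webpage hits; fields mirror the doc example.
from collections import defaultdict

hits = [
    {"domain": "a.com", "score": 0.9,  "title": "A1"},
    {"domain": "a.com", "score": 0.4,  "title": "A2"},
    {"domain": "b.com", "score": 0.95, "title": "B1"},
    {"domain": "b.com", "score": 0.7,  "title": "B2"},
]

# `terms` on the domain field: bucket hits by domain.
buckets = defaultdict(list)
for hit in hits:
    buckets[hit["domain"]].append(hit)

# `max` sub-aggregation driving bucket order: sort buckets by the
# score of their most relevant document, descending.
ordered = sorted(buckets.items(),
                 key=lambda kv: max(h["score"] for h in kv[1]),
                 reverse=True)

# `top_hits` sub-aggregation: keep the best hit per bucket.
top_hits = [(domain, max(group, key=lambda h: h["score"])["title"])
            for domain, group in ordered]
# top_hits -> [("b.com", "B1"), ("a.com", "A1")]
```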
Lines changed: 34 additions & 30 deletions

@@ -1,9 +1,18 @@
 [[cluster-update-settings]]
 == Cluster Update Settings

-Allows to update cluster wide specific settings. Settings updated can
-either be persistent (applied across restarts) or transient (will not
-survive a full cluster restart). Here is an example:
+Use this API to review and change cluster-wide settings.
+
+To review cluster settings:
+
+[source,js]
+--------------------------------------------------
+GET /_cluster/settings
+--------------------------------------------------
+// CONSOLE
+
+Updates to settings can be persistent, meaning they apply across restarts, or transient, where they don't
+survive a full cluster restart. Here is an example of a persistent update:

 [source,js]
 --------------------------------------------------

@@ -16,7 +25,7 @@ PUT /_cluster/settings
 --------------------------------------------------
 // CONSOLE

-Or:
+This update is transient:

 [source,js]
 --------------------------------------------------

@@ -29,8 +38,7 @@ PUT /_cluster/settings?flat_settings=true
 --------------------------------------------------
 // CONSOLE

-The cluster responds with the settings updated. So the response for the
-last example will be:
+The response to an update returns the changed setting, as in this response to the transient example:

 [source,js]
 --------------------------------------------------

@@ -44,11 +52,14 @@ last example will be:
 --------------------------------------------------
 // TESTRESPONSE[s/\.\.\./"acknowledged": true,/]

-Resetting persistent or transient settings can be done by assigning a
-`null` value. If a transient setting is reset, the persistent setting
-is applied if available. Otherwise Elasticsearch will fallback to the setting
-defined at the configuration file or, if not existent, to the default
-value. Here is an example:
+You can reset persistent or transient settings by assigning a
+`null` value. If a transient setting is reset, the first one of these values that is defined is applied:
+
+* the persistent setting
+* the setting in the configuration file
+* the default value.
+
+This example resets a setting:

 [source,js]
 --------------------------------------------------

@@ -61,8 +72,7 @@ PUT /_cluster/settings
 --------------------------------------------------
 // CONSOLE

-Reset settings will not be included in the cluster response. So
-the response for the last example will be:
+The response does not include settings that have been reset:

 [source,js]
 --------------------------------------------------

@@ -74,8 +84,8 @@ the response for the last example will be:
 --------------------------------------------------
 // TESTRESPONSE[s/\.\.\./"acknowledged": true,/]

-Settings can also be reset using simple wildcards. For instance to reset
-all dynamic `indices.recovery` setting a prefix can be used:
+You can also reset settings using wildcards. For example, to reset
+all dynamic `indices.recovery` settings:

 [source,js]
 --------------------------------------------------

@@ -88,25 +98,19 @@ PUT /_cluster/settings
 --------------------------------------------------
 // CONSOLE

-Cluster wide settings can be returned using:
-
-[source,js]
---------------------------------------------------
-GET /_cluster/settings
---------------------------------------------------
-// CONSOLE

 [float]
-=== Precedence of settings
+=== Order of Precedence
+
+The order of precedence for cluster settings is:

-Transient cluster settings take precedence over persistent cluster settings,
-which take precedence over settings configured in the `elasticsearch.yml`
-config file.
+1. transient cluster settings
+2. persistent cluster settings
+3. settings in the `elasticsearch.yml` configuration file.

-For this reason it is preferrable to use the `elasticsearch.yml` file only
-for local configurations, and set all cluster-wider settings with the
+It's best to use the `elasticsearch.yml` file only
+for local configurations, and set all cluster-wide settings with the
 `settings` API.

-A list of dynamically updatable settings can be found in the
-<<modules,Modules>> documentation.
+You can find the list of settings that you can dynamically update in <<modules,Modules>>.
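The reset and precedence rules in this hunk (transient, then persistent, then `elasticsearch.yml`, then the default) can be sketched as a small resolver. This only illustrates the documented rules; the function and the setting values are hypothetical:

```python
# Sketch of the reset/precedence rules: a value of None models the
# `null` reset, and lookup falls through transient -> persistent ->
# elasticsearch.yml -> default. Values here are made up.
def resolve(name, transient, persistent, yml, default=None):
    for source in (transient, persistent, yml):
        value = source.get(name)
        if value is not None:   # None means "unset" or "reset to null"
            return value
    return default

transient = {"indices.recovery.max_bytes_per_sec": "20mb"}
persistent = {"indices.recovery.max_bytes_per_sec": "40mb"}
yml = {}

print(resolve("indices.recovery.max_bytes_per_sec",
              transient, persistent, yml))
# -> "20mb"; resetting the transient value to None would make the
#    same lookup fall back to the persistent "40mb".
```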

docs/reference/index-modules/store.asciidoc

Lines changed: 0 additions & 5 deletions
@@ -67,11 +67,6 @@ process equal to the size of the file being mapped. Before using this
 class, be sure you have allowed plenty of
 <<vm-max-map-count,virtual address space>>.

-[[default_fs]]`default_fs` deprecated[5.0.0, The `default_fs` store type is deprecated - use `fs` instead]::
-
-The `default` type is deprecated and is aliased to `fs` for backward
-compatibility.
-
 === Pre-loading data into the file system cache

 NOTE: This is an expert setting, the details of which may change in the future.

docs/reference/setup/bootstrap-checks.asciidoc

Lines changed: 13 additions & 13 deletions
@@ -118,6 +118,19 @@ least 4096 threads. This can be done via `/etc/security/limits.conf`
 using the `nproc` setting (note that you might have to increase the
 limits for the `root` user too).

+=== Max file size check
+
+The segment files that are the components of individual shards and the translog
+generations that are components of the translog can get large (exceeding
+multiple gigabytes). On systems where the max size of files that can be created
+by the Elasticsearch process is limited, this can lead to failed
+writes. Therefore, the safest option here is that the max file size is unlimited
+and that is what the max file size bootstrap check enforces. To pass the max
+file check, you must configure your system to allow the Elasticsearch process
+the ability to write files of unlimited size. This can be done via
+`/etc/security/limits.conf` using the `fsize` setting to `unlimited` (note that
+you might have to increase the limits for the `root` user too).
+
 [[max-size-virtual-memory-check]]
 === Maximum size virtual memory check

@@ -133,19 +146,6 @@ address space. This can be done via `/etc/security/limits.conf` using
 the `as` setting to `unlimited` (note that you might have to increase
 the limits for the `root` user too).

-=== Max file size check
-
-The segment files that are the components of individual shards and the translog
-generations that are components of the translog can get large (exceeding
-multiple gigabytes). On systems where the max size of files that can be created
-by the Elasticsearch process is limited, this can lead to failed
-writes. Therefore, the safest option here is that the max file size is unlimited
-and that is what the max file size bootstrap check enforces. To pass the max
-file check, you must configure your system to allow the Elasticsearch process
-the ability to write files of unlimited size. This can be done via
-`/etc/security/limits.conf` using the `fsize` setting to `unlimited` (note that
-you might have to increase the limits for the `root` user too).
-
 === Maximum map count check

 Continuing from the previous <<max-size-virtual-memory-check,point>>, to
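The max file size check that this hunk reorders can be approximated from a process's own resource limits. A rough Unix-only Python sketch of the pass condition (the function name is ours, not how Elasticsearch implements its bootstrap check):

```python
# Rough analogue of the max file size bootstrap check: it passes only
# when the process file-size limit (`fsize` in limits.conf, i.e.
# RLIMIT_FSIZE) is unlimited. Unix-only; illustrative helper name.
import resource

def max_file_size_check_passes():
    soft, _hard = resource.getrlimit(resource.RLIMIT_FSIZE)
    return soft == resource.RLIM_INFINITY

print(max_file_size_check_passes())
```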

docs/reference/setup/sysconfig/virtual-memory.asciidoc

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 [[vm-max-map-count]]
 === Virtual memory

-Elasticsearch uses a <<default_fs,`mmapfs`>> directory by
+Elasticsearch uses a <<mmapfs,`mmapfs`>> directory by
 default to store its indices. The default operating system limits on mmap
 counts is likely to be too low, which may result in out of memory exceptions.
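The mmap count limit this page refers to is exposed at `/proc/sys/vm/max_map_count` on Linux. A small sketch for inspecting it (the helper name is ours; it returns None on systems without procfs):

```python
# Read the kernel's mmap count limit discussed in the doc hunk above.
# Linux-only: returns None where /proc is unavailable.
import os

def max_map_count():
    path = "/proc/sys/vm/max_map_count"
    if not os.path.exists(path):
        return None
    with open(path) as f:
        return int(f.read().strip())

print(max_map_count())
```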

modules/aggs-matrix-stats/src/main/java/org/elasticsearch/search/aggregations/support/ArrayValuesSourceParser.java

Lines changed: 0 additions & 4 deletions
@@ -78,7 +78,6 @@ private ArrayValuesSourceParser(boolean formattable, ValuesSourceType valuesSour
         throws IOException {

         List<String> fields = null;
-        ValueType valueType = null;
         String format = null;
         Map<String, Object> missingMap = null;
         Map<ParseField, Object> otherOptions = new HashMap<>();

@@ -145,9 +144,6 @@ private ArrayValuesSourceParser(boolean formattable, ValuesSourceType valuesSour
         if (fields != null) {
             factory.fields(fields);
         }
-        if (valueType != null) {
-            factory.valueType(valueType);
-        }
         if (format != null) {
             factory.format(format);
         }

modules/ingest-common/src/test/java/org/elasticsearch/ingest/common/ConvertProcessorTests.java

Lines changed: 7 additions & 6 deletions
@@ -19,24 +19,24 @@

 package org.elasticsearch.ingest.common;

-import org.elasticsearch.ingest.IngestDocument;
-import org.elasticsearch.ingest.Processor;
-import org.elasticsearch.ingest.RandomDocumentPicks;
-import org.elasticsearch.test.ESTestCase;
-
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Locale;
 import java.util.Map;

+import org.elasticsearch.ingest.IngestDocument;
+import org.elasticsearch.ingest.Processor;
+import org.elasticsearch.ingest.RandomDocumentPicks;
+import org.elasticsearch.test.ESTestCase;
+
 import static org.elasticsearch.ingest.IngestDocumentMatcher.assertIngestDocument;
 import static org.elasticsearch.ingest.common.ConvertProcessor.Type;
 import static org.hamcrest.Matchers.containsString;
 import static org.hamcrest.Matchers.equalTo;
-import static org.hamcrest.Matchers.sameInstance;
 import static org.hamcrest.Matchers.not;
+import static org.hamcrest.Matchers.sameInstance;

 public class ConvertProcessorTests extends ESTestCase {

@@ -138,6 +138,7 @@ public void testConvertLongLeadingZero() throws Exception {
         assertThat(ingestDocument.getFieldValue(fieldName, Long.class), equalTo(10L));
     }

+    @AwaitsFix( bugUrl = "https://github.com/elastic/elasticsearch/issues/32370")
     public void testConvertLongHexError() {
         IngestDocument ingestDocument = RandomDocumentPicks.randomIngestDocument(random());
         String value = "0x" + randomAlphaOfLengthBetween(1, 10);
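The newly muted `testConvertLongHexError` builds `"0x" + randomAlphaOfLengthBetween(1, 10)` and expects the conversion to fail, but random letters can occasionally form valid hex. A Python analogy (assumed, not the Java parsing code under test) of one plausible source of flakiness in such input:

```python
# Letters a-z only sometimes form valid hex ("cafe" does, "zz" does
# not), so "0x" + random letters does not reliably fail to parse.
def is_valid_hex(value):
    try:
        int(value, 16)          # base 16 accepts an optional "0x" prefix
        return True
    except ValueError:
        return False

print(is_valid_hex("0xcafe"))   # True: every letter is a hex digit
print(is_valid_hex("0xzz"))     # False: 'z' is not a hex digit
```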
