Commit 5738345 (1 parent: 5baebb8)

Address review feedback and spotless complaint

File tree: 5 files changed, +23 -18 lines

RELEASENOTES.md

Lines changed: 7 additions & 7 deletions
@@ -191,7 +191,7 @@ After this feature, there are two implementations of StoreFileTrackers. The firs
 
 This feature is notable in that it better enables HBase to function on storage systems which do not provide the typical posix filesystem semantics, most importantly, those which do not implement a file rename operation which is atomic. Storage systems which do not implement atomic renames often implement a rename as a copy and delete operation which amplifies the I/O costs by 2x.
 
-At scale, this feature should have a 2x reduction in I/O costs when using storage systems that do not provide atomic renames, most importantly in HBase compactions and memstore flushes. See the corresponding section, "Store File Tracking", in the HBase book for more information on how to use this feature.
+At scale, this feature should have a 2x reduction in I/O costs when using storage systems that do not provide atomic renames, most importantly in HBase compactions and memstore flushes. See the corresponding section, "Store File Tracking", in the HBase book for more information on how to use this feature.
 
 The file based StoreFileTracker, FileBasedStoreFileTracker, is currently incompatible with the Medium Objects (MOB) feature. Do not enable them together.
 
@@ -359,12 +359,12 @@ If not present, master local region will use the cluster level store file tracke
 
 Introduced two shell commands for change table's or family's sft:
 
-change\_sft:
+change\_sft:
 Change table's or table column family's sft. Examples:
 hbase\> change\_sft 't1','FILE'
 hbase\> change\_sft 't2','cf1','FILE'
 
-change\_sft\_all:
+change\_sft\_all:
 Change all of the tables's sft matching the given regex:
 hbase\> change\_sft\_all 't.\*','FILE'
 hbase\> change\_sft\_all 'ns:.\*','FILE'
@@ -375,7 +375,7 @@ change\_sft\_all:
 
 * [HBASE-26742](https://issues.apache.org/jira/browse/HBASE-26742) | *Major* | **Comparator of NOT\_EQUAL NULL is invalid for checkAndMutate**
 
-The semantics of checkAndPut for null(or empty) value comparator is changed, the old match is always true.
+The semantics of checkAndPut for null(or empty) value comparator is changed, the old match is always true.
 But we should consider that EQUAL or NOT\_EQUAL for null check is a common usage, so the semantics of checkAndPut for matching null is correct now.
 There is rare use of LESS or GREATER null, so keep the semantics for them.
 
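The changed null-comparator behavior described in the HBASE-26742 note can be sketched as a toy model (illustrative Python, not HBase code; the rules are paraphrased from the note: EQUAL matches only an absent/empty cell, NOT_EQUAL only a present one, and LESS/GREATER keep the old always-match behavior):

```python
# Toy model of the HBASE-26742 checkAndPut semantics for a null/empty
# compare value. Illustrative sketch only -- not actual HBase code.

def null_comparator_matches(op: str, stored: bytes) -> bool:
    """Whether a checkAndPut with a null/empty compare value matches.

    `stored` is the current cell value; an absent cell is modeled as b"".
    """
    if op == "EQUAL":
        return len(stored) == 0   # new: matches only when the cell is null/empty
    if op == "NOT_EQUAL":
        return len(stored) > 0    # new: matches only when the cell has a value
    return True                   # LESS/GREATER keep the old always-true match

print(null_comparator_matches("EQUAL", b""))       # -> True  (absent cell)
print(null_comparator_matches("NOT_EQUAL", b"v"))  # -> True  (existing cell)
```

Under the pre-HBASE-26742 behavior, all five cases above would have matched unconditionally.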
@@ -471,7 +471,7 @@ Now we will upload the site artifacts to nightlies for nightly build as well as
 
 * [HBASE-26316](https://issues.apache.org/jira/browse/HBASE-26316) | *Minor* | **Per-table or per-CF compression codec setting overrides**
 
-It is now possible to specify codec configuration options as part of table or column family schema definitions. The configuration options will only apply to the defined scope. For example:
+It is now possible to specify codec configuration options as part of table or column family schema definitions. The configuration options will only apply to the defined scope. For example:
 
 hbase\> create 'sometable', \\
 { NAME =\> 'somefamily', COMPRESSION =\> 'ZSTD' }, \\
@@ -781,7 +781,7 @@ belong to system RSGroup only.
 
 * [HBASE-25902](https://issues.apache.org/jira/browse/HBASE-25902) | *Critical* | **Add missing CFs in meta during HBase 1 to 2.3+ Upgrade**
 
-While upgrading cluster from 1.x to 2.3+ versions, after the active master is done setting it's status as 'Initialized', it attempts to add 'table' and 'repl\_barrier' CFs in meta. Once CFs are added successfully, master is aborted with PleaseRestartMasterException because master has missed certain initialization events (e.g ClusterSchemaService is not initialized and tableStateManager fails to migrate table states from ZK to meta due to missing CFs). Subsequent active master initialization is expected to be smooth.
+While upgrading cluster from 1.x to 2.3+ versions, after the active master is done setting it's status as 'Initialized', it attempts to add 'table' and 'repl\_barrier' CFs in meta. Once CFs are added successfully, master is aborted with PleaseRestartMasterException because master has missed certain initialization events (e.g ClusterSchemaService is not initialized and tableStateManager fails to migrate table states from ZK to meta due to missing CFs). Subsequent active master initialization is expected to be smooth.
 In the presence of multi masters, when one of them becomes active for the first time after upgrading to HBase 2.3+, it is aborted after fixing CFs in meta and one of the other backup masters will take over and become active soon. Hence, overall this is expected to be smooth upgrade if we have backup masters configured. If not, operator is expected to restart same master again manually.
 
 
@@ -1053,7 +1053,7 @@ Expose HBCK repost results in metrics, includes: "orphanRegionsOnRS", "orphanReg
 
 * [HBASE-25582](https://issues.apache.org/jira/browse/HBASE-25582) | *Major* | **Support setting scan ReadType to be STREAM at cluster level**
 
-Adding a new meaning for the config 'hbase.storescanner.pread.max.bytes' when configured with a value \<0.
+Adding a new meaning for the config 'hbase.storescanner.pread.max.bytes' when configured with a value \<0.
 In HBase 2.x we allow the Scan op to specify a ReadType (PREAD / STREAM/ DEFAULT). When Scan comes with DEFAULT read type, we will start scan with preads and later switch to stream read once we see we are scanning a total data size \> value of hbase.storescanner.pread.max.bytes. (This is calculated for data per region:cf). This config defaults to 4 x of HFile block size = 256 KB by default.
 This jira added a new meaning for this config when configured with a -ve value. In such case, for all scans with DEFAULT read type, we will start with STREAM read itself. (Switch at begin of the scan itself)
 
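In configuration terms, the negative-value convention from the HBASE-25582 note would look roughly like this (a hypothetical hbase-site.xml fragment; -1 stands in for any value below zero, per the note):

```xml
<!-- Sketch based on the HBASE-25582 note above: a negative value makes all
     scans with DEFAULT read type start in STREAM mode from the beginning,
     instead of starting with pread and switching later. -->
<property>
  <name>hbase.storescanner.pread.max.bytes</name>
  <value>-1</value>
</property>
```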
hbase-hadoop2-compat/pom.xml

Lines changed: 6 additions & 6 deletions
@@ -82,8 +82,8 @@ limitations under the License.
      -->
      <groupId>javax.activation</groupId>
      <artifactId>javax.activation-api</artifactId>
-      <scope>runtime</scope>
      <version>${javax.activation.version}</version>
+      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.commons</groupId>
@@ -199,7 +199,7 @@ limitations under the License.
      </dependencies>
    </profile>
    <!--
-      profile for building against Hadoop 3.0.x. Activate using:
+      profile for building against Hadoop 3.x. Activate using:
       mvn -Dhadoop.profile=3.0
    -->
    <profile>
@@ -215,10 +215,10 @@ limitations under the License.
          <groupId>org.apache.hadoop</groupId>
          <artifactId>hadoop-common</artifactId>
        </dependency>
-        <dependency>
-          <groupId>org.apache.hadoop</groupId>
-          <artifactId>hadoop-mapreduce-client-core</artifactId>
-        </dependency>
+        <dependency>
+          <groupId>org.apache.hadoop</groupId>
+          <artifactId>hadoop-mapreduce-client-core</artifactId>
+        </dependency>
      </dependencies>
    </profile>
    <!-- Skip the tests in this module -->

hbase-mapreduce/pom.xml

Lines changed: 6 additions & 1 deletion
@@ -235,6 +235,11 @@
      <artifactId>log4j-1.2-api</artifactId>
      <scope>test</scope>
    </dependency>
+    <dependency>
+      <groupId>org.bouncycastle</groupId>
+      <artifactId>bcprov-jdk15on</artifactId>
+      <scope>test</scope>
+    </dependency>
  </dependencies>
 
  <build>
@@ -340,7 +345,7 @@
      </dependencies>
    </profile>
    <!--
-      profile for building against Hadoop 3.0.x. Activate using:
+      profile for building against Hadoop 3.x. Activate using:
       mvn -Dhadoop.profile=3.0
    -->
    <profile>

hbase-shaded/hbase-shaded-testing-util/pom.xml

Lines changed: 1 addition & 1 deletion
@@ -189,7 +189,7 @@
      </dependencies>
    </profile>
    <!--
-      profile for building against Hadoop 3.0.0. Activate using:
+      profile for building against Hadoop 3.x. Activate using:
       mvn -Dhadoop.profile=3.0
    -->
    <profile>

pom.xml

Lines changed: 3 additions & 3 deletions
@@ -1886,26 +1886,26 @@
        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>flatten-maven-plugin</artifactId>
-          <!--<version>1.3.0</version>-->
+          <version>1.3.0</version>
          <configuration>
            <embedBuildProfileDependencies>true</embedBuildProfileDependencies>
          </configuration>
          <executions>
            <!-- enable flattening -->
            <execution>
              <id>flatten</id>
-              <phase>process-resources</phase>
              <goals>
                <goal>flatten</goal>
              </goals>
+              <phase>process-resources</phase>
            </execution>
            <!-- ensure proper cleanup -->
            <execution>
              <id>flatten.clean</id>
-              <phase>clean</phase>
              <goals>
                <goal>clean</goal>
              </goals>
+              <phase>clean</phase>
            </execution>
          </executions>
        </plugin>
