
Commit d2a85e1

HBASE-27516 Document the table based replication queue storage in ref guide

1 parent 772acaa commit d2a85e1

File tree

1 file changed: +64 -11 lines

src/main/asciidoc/_chapters/ops_mgt.adoc

Lines changed: 64 additions & 11 deletions
@@ -2429,17 +2429,13 @@ This option was introduced in link:https://issues.apache.org/jira/browse/HBASE-1
==== Replication Internals

Replication State Storage::
-In HBASE-15867, we abstract two interfaces for storing replication state,
-`ReplicationPeerStorage` and `ReplicationQueueStorage`. The former one is for storing the
-replication peer related states, and the latter one is for storing the replication queue related
-states.
-HBASE-15867 is only half done, as although we have abstract these two interfaces, we still only
-have zookeeper based implementations.
+In HBASE-27110, we implemented a file system based replication peer storage, which stores the replication peer state on a file system. You can still use the ZooKeeper based replication peer storage.
+In HBASE-27109, we changed the replication queue storage from ZooKeeper based to hbase table based. See the <<hbase:replication,hbase:replication table>> section below for more details.

Replication State in ZooKeeper::
By default, the state is contained in the base node _/hbase/replication_.
-Usually this nodes contains two child nodes, the `peers` znode is for storing replication peer
-state, and the `rs` znodes is for storing replication queue state.
+Usually this node contains two child nodes: the `peers` znode, which stores replication peer state, and the `rs` znode, which stores replication queue state. If you choose the file system based replication peer storage, you will not see the `peers` znode.
+Starting from 3.0.0, the replication queue state has been moved to the <<hbase:replication,hbase:replication>> table, so you will not see the `rs` znode either.

The `Peers` Znode::
The `peers` znode is stored in _/hbase/replication/peers_ by default.
@@ -2454,6 +2450,12 @@ The `RS` Znode::
The child znode name is the region server's hostname, client port, and start code.
This list includes both live and dead region servers.

+[[hbase:replication]]
+The hbase:replication Table::
+After 3.0.0, the `Queue` is stored in the hbase:replication table, where the row key is <PeerId>-<ServerName>[/<SourceServerName>], the WAL group is the qualifier, and the serialized ReplicationGroupOffset is the value.
+The ReplicationGroupOffset includes the WAL file of the corresponding queue (<PeerId>-<ServerName>[/<SourceServerName>]) and its offset.
+Because we track the replication offset per queue instead of per file, we only need to store one replication offset per queue.
+
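For illustration only, the following sketch scans the hbase:replication table and prints each queue's row key and WAL group qualifiers. It is a hypothetical diagnostic snippet rather than part of HBase: it assumes a standard client connection with permission to read this system table, and it leaves the serialized ReplicationGroupOffset value unparsed.

[source,java]
----
// Hypothetical diagnostic snippet: scan hbase:replication and print each
// queue's row key and WAL group qualifiers. Requires read access to the
// system table; the serialized ReplicationGroupOffset value is not parsed.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DumpReplicationQueues {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf("hbase:replication"));
        ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result result : scanner) {
        // Row key: <PeerId>-<ServerName>[/<SourceServerName>]
        System.out.println("queue: " + Bytes.toString(result.getRow()));
        // One column per WAL group; the value is the serialized offset.
        result.getNoVersionMap().forEach((family, qualifiers) ->
            qualifiers.forEach((qualifier, value) -> System.out.println(
                "  wal group: " + Bytes.toString(qualifier)
                    + " (offset: " + value.length + " bytes)")));
      }
    }
  }
}
----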
Other implementations for `ReplicationPeerStorage`::
Starting from 2.6.0, we introduce a file system based `ReplicationPeerStorage`, which stores
the replication peer state with files on HFile file system, instead of znodes on ZooKeeper.
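If you want to opt in to the file system based peer storage, a configuration sketch might look like the following. The `hbase.replication.peer.storage.type` key and its `filesystem` value are assumptions here; verify the exact property name and values against your HBase version.

[source,java]
----
// A minimal sketch, not a definitive recipe: the property name and value
// below are assumptions -- verify the exact selector key for your version.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PeerStorageSelectionSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Opt in to the file system based replication peer storage (assumed key).
    conf.set("hbase.replication.peer.storage.type", "filesystem");
    System.out.println(conf.get("hbase.replication.peer.storage.type"));
  }
}
----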
@@ -2473,7 +2475,7 @@ A ZooKeeper watcher is placed on the _${zookeeper.znode.parent}/rs_ node of the
This watch is used to monitor changes in the composition of the slave cluster.
When nodes are removed from the slave cluster, or if nodes go down or come back up, the master cluster's region servers will respond by selecting a new pool of slave region servers to replicate to.

-==== Keeping Track of Logs
+==== Keeping Track of Logs (based on ZooKeeper)

Each master cluster region server has its own znode in the replication znodes hierarchy.
It contains one znode per peer cluster (if 5 slave clusters, 5 znodes are created), and each of these contains a queue of WALs to process.
@@ -2494,6 +2496,18 @@ If the log is in the queue, the path will be updated in memory.
If the log is currently being replicated, the change will be done atomically so that the reader doesn't attempt to open the file when it has already been moved.
Because moving a file is a NameNode operation, if the reader is currently reading the log, it won't generate any exception.

+==== Keeping Track of Logs (based on hbase table)
+
+After 3.0.0, for the table based implementation, the server name is part of the row key, which means there will be lots of rows for a given peer.
+
+For a normal replication queue, the WAL files belong to a region server that is still alive, so all the WAL files are kept in memory and we do not need to get them from the replication queue storage.
+For a recovered replication queue, we can get the WAL files of the dead region server by listing the old WAL directory on HDFS, so theoretically we do not need to store every WAL file in the replication queue storage either.
+What's more, the WAL file name usually contains the created time, so all the WAL files in a WAL group can be sorted (the current replication framework actually sorts them), which means we only need to store one replication offset per queue.
+When starting a recovered replication queue, we will skip all the files before this offset, and start replicating from this offset, as the sketch below illustrates.
+
+For ReplicationLogCleaner, all the files before this offset can be deleted; the others cannot.
+
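The following is a minimal sketch of that "one offset per queue" idea. It assumes WAL file names end with a creation timestamp suffix; the class and method names are hypothetical, not the HBase internals.

[source,java]
----
// A minimal sketch under stated assumptions: WAL file names end in
// ".<timestamp>", so a timestamp sort recovers creation order. Names and
// types here are hypothetical, not the HBase internals.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RecoveredQueueStart {
  // Extract the creation timestamp suffix from a WAL file name such as
  // "1.1.1.2%2C60020%2C123456790.1612345678901".
  static long walTimestamp(String walName) {
    return Long.parseLong(walName.substring(walName.lastIndexOf('.') + 1));
  }

  // Given all WAL files of a group (listed from the old WAL directory on
  // HDFS) and the single stored offset, keep only the files that still need
  // replicating: the file the offset points into, plus everything newer.
  static List<String> filesToReplicate(List<String> walFiles, String offsetWal) {
    List<String> sorted = new ArrayList<>(walFiles);
    sorted.sort(Comparator.comparingLong(RecoveredQueueStart::walTimestamp));
    List<String> remaining = new ArrayList<>();
    for (String wal : sorted) {
      // Skip everything strictly before the offset file; a cleaner could
      // delete exactly the skipped files.
      if (walTimestamp(wal) >= walTimestamp(offsetWal)) {
        remaining.add(wal);
      }
    }
    return remaining;
  }
}
----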
==== Reading, Filtering and Sending Edits

By default, a source attempts to read from a WAL and ship log entries to a sink as quickly as possible.
@@ -2523,8 +2537,8 @@ NOTE: WALs are saved when replication is enabled or disabled as long as peers ex

When no region servers are failing, keeping track of the logs in ZooKeeper adds no value.
Unfortunately, region servers do fail, and since ZooKeeper is highly available, it is useful for managing the transfer of the queues in the event of a failure.
-
-Each of the master cluster region servers keeps a watcher on every other region server, in order to be notified when one dies (just as the master does). When a failure happens, they all race to create a znode called `lock` inside the dead region server's znode that contains its queues.
+Each of the master cluster region servers keeps a watcher on every other region server, in order to be notified when one dies (just as the master does).
+When a failure happens, they all race to create a znode called `lock` inside the dead region server's znode that contains its queues.
The region server that creates it successfully then transfers all the queues to its own znode, one at a time since ZooKeeper does not support renaming queues.
After queues are all transferred, they are deleted from the old location.
The znodes that were recovered are renamed with the ID of the slave cluster appended with the name of the dead server.
@@ -2533,6 +2547,11 @@ Next, the master cluster region server creates one new source thread per copied
The main difference is that those queues will never receive new data, since they do not belong to their new region server.
When the reader hits the end of the last log, the queue's znode is deleted and the master cluster region server closes that replication source.

+Starting from 2.5.0, the failover logic has been moved to SCP, which adds a SERVER_CRASH_CLAIM_REPLICATION_QUEUES step to claim the replication queues of a dead server.
+Starting from 3.0.0, where we changed the replication queue storage from ZooKeeper based to hbase table based, updates to the replication queue storage are async, so we also need an extra step to add the missing replication queues before claiming.
+
+==== The replication queue claiming (based on ZooKeeper)
+
Given a master cluster with 3 region servers replicating to a single slave with id `2`, the following hierarchy represents what the znodes layout could be at some point in time.
The region servers' znodes all contain a `peers` znode which contains a single queue.
The znode names in the queues represent the actual file names on HDFS in the form `address,port.timestamp`.
@@ -2610,6 +2629,32 @@ The new layout will be:
1.1.1.2,60020.1312 (Contains a position)
----

+==== The replication queue claiming (based on hbase table)
+
+Given a master cluster with 3 region servers replicating to a single slave with id `2`, the following represents the storage layout of the queues in the hbase:replication table at some point in time.
+The row key is <PeerId>-<ServerName>[/<SourceServerName>], and the value is the WAL file and its offset.
+
+----
+
+<PeerId>-<ServerName>[/<SourceServerName>]         WAL && Offset
+2-1.1.1.1,60020,123456780                          1.1.1.1,60020.1234 (Contains a position)
+2-1.1.1.2,60020,123456790                          1.1.1.2,60020.1214 (Contains a position)
+2-1.1.1.3,60020,123456630                          1.1.1.3,60020.1280 (Contains a position)
+----
+
+Assume that 1.1.1.2 failed.
+The survivors will race to claim its queues, and, arbitrarily, 1.1.1.3 wins.
+It claims all the queues of 1.1.1.2 by removing the row of each claimed replication queue and inserting a new row, where the server name is changed to the region server which claims the queue (see the sketch below).
+Finally, the layout will look like the following:
+
+----
+
+<PeerId>-<ServerName>[/<SourceServerName>]         WAL && Offset
+2-1.1.1.1,60020,123456780                          1.1.1.1,60020.1234 (Contains a position)
+2-1.1.1.3,60020,123456630                          1.1.1.3,60020.1280 (Contains a position)
+2-1.1.1.3,60020,123456630/1.1.1.2,60020,123456790  1.1.1.2,60020.1214 (Contains a position)
+----
+
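The sketch below illustrates that claim step as plain client-style code, under stated assumptions: the claimer copies the cells of the dead server's row under a new row key carrying its own server name plus the source server name, then deletes the old row. The names here are illustrative; the real logic lives behind the `ReplicationQueueStorage` interface.

[source,java]
----
// Illustrative sketch of claiming one queue row; not the internal
// ReplicationQueueStorage API. The claimer rewrites the dead server's row
// under its own server name, appending the source server name.
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ClaimQueueSketch {
  static void claim(Table replicationTable, Result deadServerRow,
      String peerId, String deadServer, String claimer) throws Exception {
    // New row key: <PeerId>-<ClaimerServerName>/<DeadServerName>
    byte[] newRow = Bytes.toBytes(peerId + "-" + claimer + "/" + deadServer);
    Put put = new Put(newRow);
    // Copy every WAL group qualifier and its serialized offset as-is.
    deadServerRow.getNoVersionMap().forEach((family, qualifiers) ->
        qualifiers.forEach((qualifier, value) ->
            put.addColumn(family, qualifier, value)));
    // Insert the claimed queue under the new owner, then remove the old row.
    replicationTable.put(put);
    replicationTable.delete(new Delete(deadServerRow.getRow()));
  }
}
----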
=== Replication Metrics

The following metrics are exposed at the global region server level and at the peer level:
@@ -2694,6 +2739,14 @@ The following metrics are exposed at the global region server level and at the p
| The directory for storing replication peer state, when filesystem replication
peer storage is specified
| peers
+
+| hbase.replication.queue.table.name
+| The table for storing replication queue state
+| hbase:replication
+
+| hbase.replication.queue.storage.impl
+| The replication queue storage implementation
+| TableReplicationQueueStorage
|===
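To make the two new properties concrete, here is a minimal configuration sketch. The fully qualified class name below is an assumption for illustration; since the values shown in the table are the defaults, no override is normally needed.

[source,java]
----
// A minimal sketch, assuming HBase 3.0.0+; the fully qualified class name
// is an assumption for illustration -- by default these properties already
// select the table based queue storage, so overriding is rarely necessary.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class QueueStorageConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Table used for replication queue state (default: hbase:replication).
    conf.set("hbase.replication.queue.table.name", "hbase:replication");
    // Queue storage implementation (default: TableReplicationQueueStorage).
    conf.set("hbase.replication.queue.storage.impl",
        "org.apache.hadoop.hbase.replication.TableReplicationQueueStorage");
  }
}
----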

=== Monitoring Replication Status
