Commit b1fa73e (1 parent: e7e0336), committed by anotherrachel on Nov 15, 2019
reference: add PD scheduling best practice and glossary
Showing 2 changed files with 317 additions and 0 deletions.
---
title: Glossary
summary: Glossary of TiDB terms.
category: glossary
---

# Glossary

## L

### Leader/Follower/Learner

Leader, Follower, and Learner are the three roles a [Peer](#regionpeerraft-group) can take in a Raft group. The Leader serves all client requests and replicates data to the Followers. If the Leader fails, one of the Followers is elected as the new Leader. Learners are non-voting Followers that only synchronize the Raft log; currently they exist only briefly, during the process of adding a replica.
## O

### Operator

An Operator is a collection of actions that apply to one Region to serve a scheduling purpose, for example, "migrate the Leader of Region 2 to Store 5" or "migrate replicas of Region 2 to Stores 1, 4, and 5".

An Operator can be computed and generated by a Scheduler, or created through an external API.

### Operator Step

An Operator Step is one step in the execution of an Operator. An Operator normally contains multiple Operator Steps.

Currently, the Steps generated by PD include:

- `TransferLeader`: migrate the Leader of a Region to a specified Peer
- `AddPeer`: add a Follower to a specified Store
- `RemovePeer`: delete a Peer of a Region
- `AddLearner`: add a Learner of a Region to a specified Store
- `PromoteLearner`: promote a specified Learner to a voting member
- `SplitRegion`: split a Region in two
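To see how an Operator decomposes into Operator Steps, here is a minimal sketch in Go. The types and the `moveReplica` helper are illustrative only, not PD's actual definitions; they model the common pattern of migrating a replica by adding a Learner, promoting it, and removing the old Peer.

```go
package main

import "fmt"

// Step is one action in an Operator. The step names mirror the list
// above; these types are illustrative, not PD's real definitions.
type Step struct {
	Kind    string // e.g. "AddLearner", "PromoteLearner", "RemovePeer"
	StoreID uint64 // target Store for the step
}

// Operator applies an ordered series of Steps to one Region.
type Operator struct {
	Desc     string
	RegionID uint64
	Steps    []Step
}

// moveReplica sketches how "migrate a replica from Store `from` to
// Store `to`" decomposes into Steps: add a Learner on the target,
// promote it to a voting member, then remove the old Peer.
func moveReplica(regionID, from, to uint64) Operator {
	return Operator{
		Desc:     fmt.Sprintf("move replica of region %d from store %d to store %d", regionID, from, to),
		RegionID: regionID,
		Steps: []Step{
			{Kind: "AddLearner", StoreID: to},
			{Kind: "PromoteLearner", StoreID: to},
			{Kind: "RemovePeer", StoreID: from},
		},
	}
}

func main() {
	op := moveReplica(2, 1, 5)
	fmt.Println(op.Desc)
	for i, s := range op.Steps {
		fmt.Printf("step %d: %s on store %d\n", i+1, s.Kind, s.StoreID)
	}
}
```

Decomposing into a Learner-first sequence keeps the Raft group's voter count stable while data catches up, which is why `AddLearner` precedes `PromoteLearner`.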
## P

### `Pending`/`Down`

`Pending` and `Down` are two special states of a Peer. `Pending` indicates that the Raft log of a Follower or Learner lags far behind that of the Leader; a Follower in the `Pending` state cannot be elected as Leader. `Down` means that a Peer has stopped responding to its Leader for a long time, which usually indicates that the corresponding node is down or isolated from the network.

## R

### Region/Peer/Raft Group

Each Region maintains a continuous range of the cluster's data (about 96 MiB on average by default). Each Region is stored as multiple replicas in different Stores (3 replicas by default), and each replica is referred to as a Peer. The Peers of the same Region synchronize data through the Raft protocol, so Peers also refer to the members of a Raft instance. TiKV manages data using the multi-raft pattern: each Region has its own standalone Raft instance, also known as a Raft Group.
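The relationship between a Region, its Peers, and the Stores holding them can be sketched with hypothetical types (these are not TiKV's actual definitions):

```go
package main

import "fmt"

// Peer is one replica of a Region, living in one Store.
// These types are illustrative only, not TiKV's real definitions.
type Peer struct {
	ID      uint64
	StoreID uint64
}

// Region covers a continuous key range and is replicated as several
// Peers (3 by default), each placed in a different Store. In TiKV's
// multi-raft design, each Region is its own standalone Raft group.
type Region struct {
	ID       uint64
	StartKey string
	EndKey   string
	Peers    []Peer
}

// storesOf returns the Stores holding a replica of the Region.
func storesOf(r Region) []uint64 {
	ids := make([]uint64, 0, len(r.Peers))
	for _, p := range r.Peers {
		ids = append(ids, p.StoreID)
	}
	return ids
}

func main() {
	r := Region{
		ID:       2,
		StartKey: "a",
		EndKey:   "b",
		Peers:    []Peer{{ID: 10, StoreID: 1}, {ID: 11, StoreID: 4}, {ID: 12, StoreID: 5}},
	}
	fmt.Println("region", r.ID, "replicated on stores", storesOf(r))
}
```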
### Region Split

Regions in a TiKV cluster are generated gradually by splitting as written data accumulates. This splitting process is called Region Split.

The mechanism is to build one initial Region covering the entire key space at cluster initialization, and then split a Region into new Regions every time its data reaches a certain amount.
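The size-triggered split described above can be sketched as follows. The 96 MiB threshold, the halving of sizes, and the caller-supplied split key are illustrative simplifications; real TiKV uses configurable size and key-count limits with approximate statistics.

```go
package main

import "fmt"

// splitThreshold is an assumed limit for this sketch, not TiKV's
// actual configuration value.
const splitThreshold = 96 << 20 // 96 MiB

type Region struct {
	StartKey, EndKey string
	SizeBytes        uint64
}

// maybeSplit sketches the split rule: once a Region's data reaches
// the threshold, split it into two Regions at the given key.
func maybeSplit(r Region, splitKey string) []Region {
	if r.SizeBytes < splitThreshold {
		return []Region{r} // not big enough yet
	}
	left := Region{StartKey: r.StartKey, EndKey: splitKey, SizeBytes: r.SizeBytes / 2}
	right := Region{StartKey: splitKey, EndKey: r.EndKey, SizeBytes: r.SizeBytes - r.SizeBytes/2}
	return []Region{left, right}
}

func main() {
	r := Region{StartKey: "a", EndKey: "z", SizeBytes: 120 << 20}
	for _, part := range maybeSplit(r, "m") {
		fmt.Printf("[%s, %s) ~%d MiB\n", part.StartKey, part.EndKey, part.SizeBytes>>20)
	}
}
```

Note that the two child Regions together cover exactly the parent's key range, so the cluster's key space stays continuous after a split.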
## S

### Scheduler

A Scheduler is a component in PD that generates scheduling tasks. Each scheduler in PD runs independently and serves a different purpose. Commonly used schedulers and their purposes are:

- `balance-leader-scheduler`: keep the number of Leaders balanced across nodes
- `balance-region-scheduler`: keep the number of Peers balanced across nodes
- `hot-region-scheduler`: keep hot Regions balanced across nodes
- `evict-leader-{store-id}`: evict all Leaders from a node (often used for rolling upgrades)
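The core decision of a scheduler like `balance-leader-scheduler` can be sketched as: find the Store with the most Leaders and the Store with the fewest, and, if the gap is large enough, propose moving one Leader between them. Everything below, including the `minGap` parameter, is an illustrative simplification of PD's real scoring.

```go
package main

import "fmt"

// pickTransfer sketches a leader-balance decision. leaders maps a
// Store ID to its current Leader count; minGap is the smallest
// imbalance worth acting on (an assumed knob for this sketch).
func pickTransfer(leaders map[uint64]int, minGap int) (from, to uint64, ok bool) {
	first := true
	for id, n := range leaders {
		if first {
			from, to, first = id, id, false
			continue
		}
		if n > leaders[from] {
			from = id // most-loaded store so far
		}
		if n < leaders[to] {
			to = id // least-loaded store so far
		}
	}
	if first || leaders[from]-leaders[to] < minGap {
		return 0, 0, false // empty input, or already balanced enough
	}
	return from, to, true
}

func main() {
	leaders := map[uint64]int{1: 12, 4: 7, 5: 2}
	if from, to, ok := pickTransfer(leaders, 2); ok {
		// prints: transfer one leader from store 1 to store 5
		fmt.Printf("transfer one leader from store %d to store %d\n", from, to)
	}
}
```

In a real deployment the output of such a decision would be an Operator containing a `TransferLeader` step, which PD then dispatches to TiKV.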
### Store

A Store in PD refers to a storage node in the cluster, that is, an instance of tikv-server. Each Store has a corresponding TiKV instance. If multiple TiKV instances are deployed on the same host, or even on the same disk, these instances still correspond to different Stores.