Changes from 16 commits
94 changes: 85 additions & 9 deletions follower-read.md
@@ -5,15 +5,36 @@

# Follower Read

When a read hotspot appears in a Region, the Region leader can become a read bottleneck for the entire system. In this situation, enabling the Follower Read feature can significantly reduce the load of the leader, and improve the throughput of the whole system by balancing the load among multiple followers. This document introduces the use and implementation mechanism of Follower Read.
In TiDB, to ensure high availability and data safety, TiKV stores multiple replicas for each Region, one of which is the leader and the others are followers. By default, all read and write requests are processed by the leader. The Follower Read feature enables TiDB to read data from follower replicas of a Region while maintaining strong consistency, thereby reducing the read workload on the leader and improving the overall read throughput of the cluster.

## Overview
<CustomContent platform="tidb">

When performing Follower Read, TiDB selects an appropriate replica based on the topology information. Specifically, TiDB uses the `zone` label to identify local replicas: if the `zone` label of a TiDB node is the same as that of the target TiKV node, TiDB considers the replica as a local replica. For more information, see [Schedule Replicas by Topology Labels](/schedule-replicas-by-topology-labels.md).

</CustomContent>

<CustomContent platform="tidb-cloud">

When performing Follower Read, TiDB selects an appropriate replica based on the topology information. Specifically, TiDB uses the `zone` label to identify local replicas: if the `zone` label of a TiDB node is the same as that of the target TiKV node, TiDB considers the replica as a local replica. The `zone` label is set automatically in TiDB Cloud.

</CustomContent>

By enabling followers to handle read requests, Follower Read achieves the following goals:

- Distribute read hotspots and reduce the leader workload.
- Prioritize local replica reads in multi-AZ or multi-datacenter deployments to minimize cross-AZ traffic.

The Follower Read feature refers to using any follower replica of a Region to serve a read request under the premise of strongly consistent reads. This feature improves the throughput of the TiDB cluster and reduces the load of the leader. It contains a series of load balancing mechanisms that offload TiKV read loads from the leader replica to the follower replica in a Region. TiKV's Follower Read implementation provides users with strongly consistent reads.
## Usage scenarios

Follower Read is suitable for the following scenarios:

- Applications with heavy read requests or significant read hotspots.
- Multi-AZ deployments where you want to prioritize reading from local replicas to reduce cross-AZ bandwidth usage.
- Read-write separation architectures where you want to further improve overall read performance.

> **Note:**
>
> To achieve strongly consistent reads, the follower node currently needs to request the current execution progress from the leader node (that is `ReadIndex`), which causes an additional network request overhead. Therefore, the main benefits of Follower Read are to isolate read requests from write requests in the cluster and to increase overall read throughput.
> To ensure strong consistency of the read results, Follower Read communicates with the leader before reading to confirm the latest commit progress (by executing the Raft `ReadIndex` operation). This introduces an additional network interaction. Therefore, Follower Read is most effective where a large number of read requests exist or read-write isolation is required. However, for low-latency single queries, the performance improvement might not be significant.
## Usage

@@ -29,7 +50,24 @@

Default: leader

This variable is used to set the expected data read mode.
This variable defines the expected data read mode. Starting from v8.5.4, this variable only takes effect on read-only SQL statements.

In scenarios where you need to reduce cross-AZ traffic by reading from local replicas, the following configurations are recommended:

- `leader`: the default value, providing the best performance.
- `closest-adaptive`: minimizes cross-AZ traffic while keeping performance loss to a minimum.
- `closest-replicas`: maximizes cross-AZ traffic savings but might cause some performance degradation.

If you are using other configurations, refer to the following table to modify them to the recommended configurations:

| Current configuration | Recommended configuration |
| ------------- | ------------- |
| `follower` | `closest-replicas` |
| `leader-and-follower` | `closest-replicas` |
| `prefer-leader` | `closest-adaptive` |
| `learner` | `closest-replicas` |
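
For example, to move from the legacy `follower` setting to the recommended `closest-replicas`, a minimal sketch in any MySQL-compatible client connected to TiDB:

```sql
-- Check the current read mode.
SHOW VARIABLES LIKE 'tidb_replica_read';

-- Apply the recommended configuration cluster-wide for new sessions.
SET GLOBAL tidb_replica_read = 'closest-replicas';

-- Or change only the current session.
SET SESSION tidb_replica_read = 'closest-replicas';
```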

If you want to use a more precise read replica selection policy, refer to the full list of available configurations as follows:

- When you set the value of `tidb_replica_read` to `leader` or an empty string, TiDB maintains its default behavior and sends all read operations to the leader replica to perform.
- When you set the value of `tidb_replica_read` to `follower`, TiDB selects a follower replica of the Region to perform read operations. If the Region has learner replicas, TiDB also considers them for reads with the same priority. If no available follower or learner replicas exist for the current Region, TiDB reads from the leader replica.
@@ -56,18 +94,56 @@
</CustomContent>

## Basic monitoring

> **Note:**
>
> This section is only applicable to TiDB Self-Managed.
<CustomContent platform="tidb">

You can check the [**TiDB** > **KV Request** > **Read Req Traffic** panel (New in v8.5.4)](/grafana-tidb-dashboard.md#kv-request) to determine whether to enable Follower Read and observe the traffic reduction effect after enabling it.

</CustomContent>

<CustomContent platform="tidb-cloud">

You can check the [**TiDB** > **KV Request** > **Read Req Traffic** panel (New in v8.5.4)](https://docs.pingcap.com/tidb/stable/grafana-tidb-dashboard#kv-request) to determine whether to enable Follower Read and observe the traffic reduction effect after enabling it.

</CustomContent>

## Implementation mechanism

Before the Follower Read feature was introduced, TiDB applied the strong leader principle and submitted all read and write requests to the leader node of a Region to handle. Although TiKV can distribute Regions evenly on multiple physical nodes, for each Region, only the leader can provide external services. The other followers can do nothing to handle read requests but receive the data replicated from the leader at all times and prepare for voting to elect a leader in case of a failover.
Before the Follower Read feature was introduced, TiDB applied the strong leader principle and submitted all read and write requests to the leader node of a Region to handle. Although TiKV can distribute Regions evenly on multiple physical nodes, for each Region, only the leader can provide external services. The other followers cannot handle read requests; they only receive the data replicated from the leader and stay ready to vote for a new leader in case of a failover.

To allow data reading in the follower node without violating linearizability or affecting Snapshot Isolation in TiDB, the follower node needs to use `ReadIndex` of the Raft protocol to ensure that the read request can read the latest data that has been committed on the leader. At the TiDB level, the Follower Read feature simply needs to send the read request of a Region to a follower replica based on the load balancing policy.
Follower Read includes a set of load balancing mechanisms that offload TiKV read requests from the leader replica to a follower replica in a Region. To allow data reading from the follower node without violating linearizability or affecting Snapshot Isolation in TiDB, the follower node needs to use `ReadIndex` of the Raft protocol to ensure that the read request can read the latest data that has been committed on the leader node. At the TiDB level, the Follower Read feature simply needs to send the read request of a Region to a follower replica based on the load balancing policy.

### Strongly consistent reads

When the follower node processes a read request, it first uses `ReadIndex` of the Raft protocol to interact with the leader of the Region, to obtain the latest commit index of the current Raft group. After the latest commit index of the leader is applied locally to the follower, the processing of a read request starts.

![read-index-flow](/media/follower-read/read-index.png)
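
The interaction above can be sketched as follows. This is an illustrative Python model, not TiKV's actual implementation (which is in Rust); the `Leader` and `Follower` classes and their method names are invented for illustration:

```python
# Illustrative model of the Raft ReadIndex flow used by Follower Read.
# All names here are invented for illustration; this is not TiKV code.

class Leader:
    def __init__(self):
        self.log = []          # committed entries, each a (key, value) pair
        self.commit_index = 0  # index of the last committed entry

    def propose(self, key, value):
        # A write is replicated and committed through Raft (simplified here).
        self.log.append((key, value))
        self.commit_index = len(self.log)

    def read_index(self):
        # The follower asks the leader for its current commit index.
        return self.commit_index


class Follower:
    def __init__(self, leader):
        self.leader = leader
        self.state = {}         # locally applied key-value data
        self.applied_index = 0  # index of the last locally applied entry

    def apply_replicated_entries(self):
        # Raft replication: apply committed entries not yet applied locally.
        upto = self.leader.commit_index
        for key, value in self.leader.log[self.applied_index:upto]:
            self.state[key] = value
        self.applied_index = upto

    def follower_read(self, key):
        # Step 1: obtain the leader's latest commit index (ReadIndex).
        target = self.leader.read_index()
        # Step 2: wait until that index is applied locally (modeled as a pull).
        while self.applied_index < target:
            self.apply_replicated_entries()
        # Step 3: serve the read from the local, now up-to-date state.
        return self.state.get(key)
```

In this model, a read issued on the follower immediately after a commit on the leader still returns the latest committed value, which is the consistency guarantee that `ReadIndex` provides.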

### Follower replica selection strategy

Because the Follower Read feature does not affect TiDB's Snapshot Isolation transaction isolation level, TiDB adopts the round-robin strategy to select the follower replica. Currently, for the coprocessor requests, the granularity of the Follower Read load balancing policy is at the connection level. For a TiDB client connected to a specific Region, the selected follower is fixed, and is switched only when it fails or the scheduling policy is adjusted.
The Follower Read feature does not affect TiDB's Snapshot Isolation transaction isolation level. TiDB selects a replica according to the `tidb_replica_read` setting for the first read attempt. For subsequent retries, TiDB prioritizes ensuring a successful read: if the selected follower node becomes inaccessible or returns an error, TiDB switches to the leader for service.
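
A minimal sketch of this attempt-then-fall-back behavior, using policy names from `tidb_replica_read` (the function and data layout are invented for illustration, not TiDB code):

```python
# Illustrative replica selection: honor the configured policy on the first
# attempt, then fall back to the leader on retries. Not actual TiDB code.

def select_replica(replicas, policy, attempt):
    """replicas: list of dicts such as {"role": "leader", "healthy": True}."""
    leader = next(r for r in replicas if r["role"] == "leader")
    if attempt == 0 and policy != "leader":
        candidates = [r for r in replicas
                      if r["role"] != "leader" and r["healthy"]]
        if candidates:
            return candidates[0]
    # On retries (or when no healthy follower exists), prioritize a
    # successful read by selecting the leader.
    return leader
```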

#### `leader`

- Always selects the leader replica for reads, regardless of its location.

#### `closest-replicas`

- When the replica in the same AZ as TiDB is the leader node, TiDB does not perform Follower Read from it.
- When the replica in the same AZ as TiDB is a follower node, TiDB performs Follower Read from it.

#### `closest-adaptive`

- If the estimated result size is below the threshold, TiDB uses the `leader` policy and does not perform Follower Read.

- If the estimated result size reaches the threshold, TiDB uses the `closest-replicas` policy.
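
The decision above can be sketched as a simple threshold check (illustrative only; the function name and byte values are assumptions, not TiDB internals):

```python
# Illustrative sketch of the closest-adaptive decision. The threshold and
# the result-size estimate are assumptions, not TiDB internals.

def closest_adaptive_policy(estimated_result_bytes, threshold_bytes):
    # Small results: the fixed ReadIndex overhead outweighs the saved
    # cross-AZ traffic, so read from the leader.
    if estimated_result_bytes < threshold_bytes:
        return "leader"
    # Large results: reading from the closest replica saves enough
    # cross-AZ traffic to justify the ReadIndex round trip.
    return "closest-replicas"
```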


### Follower Read performance overhead

To ensure strong data consistency, Follower Read performs a `ReadIndex` operation regardless of the amount of data read, which inevitably consumes additional TiKV CPU resources. Therefore, in small-query scenarios (such as point queries), the performance loss of Follower Read is relatively more noticeable. Moreover, because local reads save only limited traffic for small queries, Follower Read is recommended mainly for large queries or batch read scenarios.


However, for the non-coprocessor requests, such as a point query, the granularity of the Follower Read load balancing policy is at the transaction level. For a TiDB transaction on a specific Region, the selected follower is fixed, and is switched only when it fails or the scheduling policy is adjusted. If a transaction contains both point queries and coprocessor requests, the two types of requests are scheduled for reading separately according to the preceding scheduling policy. In this case, even if a coprocessor request and a point query are for the same Region, TiDB processes them as independent events.
When `tidb_replica_read` is set to `closest-adaptive`, TiDB does not perform Follower Read for small queries. As a result, under various workloads, the additional CPU overhead on TiKV is typically no more than 10% compared with the `leader` policy.
7 changes: 6 additions & 1 deletion grafana-tidb-dashboard.md
@@ -123,9 +123,14 @@ The following metrics relate to requests sent to TiKV. Retry requests are counted
- **local**: the number of requests per second that attempt a stale read in the local zone
- Stale Read Req Traffic:
- **cross-zone-in**: the incoming traffic of responses to requests that attempt a stale read in a remote zone
- **cross-zone-out**: the outgoing traffic of requests that attempt a stale read in a remote zone
- **cross-zone-out**: the outgoing traffic of responses to requests that attempt a stale read in a remote zone
- **local-in**: the incoming traffic of responses to requests that attempt a stale read in the local zone
- **local-out**: the outgoing traffic of requests that attempt a stale read in the local zone
- Read Req Traffic:
- **leader-local**: traffic generated by Leader Read processing read requests in the local zone
- **leader-cross-zone**: traffic generated by Leader Read processing read requests in a remote zone
- **follower-local**: traffic generated by Follower Read processing read requests in the local zone
- **follower-cross-zone**: traffic generated by Follower Read processing read requests in a remote zone

### PD Client

Binary file added media/follower-read/read-index.png
2 changes: 1 addition & 1 deletion system-variables.md
@@ -5375,7 +5375,7 @@ SHOW WARNINGS;
- Type: Enumeration
- Default value: `leader`
- Possible values: `leader`, `follower`, `leader-and-follower`, `prefer-leader`, `closest-replicas`, `closest-adaptive`, and `learner`. The `learner` value is introduced in v6.6.0.
- This variable is used to control where TiDB reads data.
- This variable is used to control where TiDB reads data. Starting from v8.5.4, this variable only takes effect on read-only SQL statements.
- For more details about usage and implementation, see [Follower read](/follower-read.md).

### tidb_restricted_read_only <span class="version-mark">New in v5.2.0</span>