update optimistic transaction (pingcap#2824)
* update optimistic transaction

* fix typos

* align the second sentence

* Apply suggestions from code review

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>

Co-authored-by: Keke Yi <40977455+yikeke@users.noreply.github.com>
Co-authored-by: Jack Yu <yusp@pingcap.com>
Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
4 people authored Jun 15, 2020
1 parent 6ecb66f commit 67930dd
Showing 1 changed file with 14 additions and 43 deletions.
57 changes: 14 additions & 43 deletions optimistic-transaction.md
@@ -7,35 +7,36 @@ aliases: ['/docs/dev/reference/transactions/transaction-optimistic/','/docs/dev/

# TiDB Optimistic Transaction Model

This document introduces the principles of TiDB's optimistic transaction model. This document assumes that you have a basic understanding of [TiDB architecture](/architecture.md), [Percolator](https://ai.google/research/pubs/pub36726), and the [ACID](/glossary.md#acid) properties of transactions.
This document introduces the principles of TiDB's optimistic transaction model and related features.

In TiDB's optimistic transaction model, the two-phase commit begins right after the client executes the `COMMIT` statement. Therefore, the write-write conflict can be observed before the transactions are actually committed.
TiDB uses the optimistic transaction model by default. In TiDB's optimistic transaction model, write-write conflicts are not checked until the transaction is committed, which is when the two-phase commit begins.

> **Note:**
>
> Starting from v3.0.8, TiDB uses the [pessimistic transaction model](/pessimistic-transaction.md) by default. However, this does not affect your clusters if you upgrading from v3.0.7 or earlier to v3.0.8 (and later). In other words, **only newly created clusters default to using the pessimistic transaction model**.
> Starting from v3.0.8, newly created TiDB clusters use the [pessimistic transaction model](/pessimistic-transaction.md) by default. However, this does not affect your existing cluster if you upgrade it from v3.0.7 or earlier to v3.0.8 or later. In other words, **only newly created clusters default to using the pessimistic transaction model**.
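
For readers who want to try the optimistic model on a cluster that now defaults to pessimistic transactions, the following is a minimal sketch, assuming your TiDB version supports the `tidb_txn_mode` session variable and the `BEGIN OPTIMISTIC` statement; the `accounts` table is a hypothetical example.

```sql
-- Switch the current session to the optimistic transaction model
-- (assumes the tidb_txn_mode variable is available in this version).
SET SESSION tidb_txn_mode = 'optimistic';

-- Alternatively, opt in for a single transaction regardless of the session default.
BEGIN OPTIMISTIC;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT; -- any write-write conflict with a concurrent transaction is reported here
```
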
## Principles of optimistic transactions

TiDB adopts Google's Percolator transaction model, a variant of two-phase commit (2PC) to ensure the correct completion of a distributed transaction. The procedure is as follows:
To support distributed transactions, TiDB adopts two-phase commit (2PC) in optimistic transactions. The procedure is as follows:

![2PC in TiDB](/media/2pc-in-tidb.png)

1. The client begins a transaction.

TiDB receives the start version number (monotonically increasing in time and globally unique) from PD and mark it as `start_ts`.
TiDB gets a timestamp (monotonically increasing in time and globally unique) from PD as the unique transaction ID of the current transaction, which is called `start_ts`. TiDB implements multi-version concurrency control, so `start_ts` also serves as the version of the database snapshot obtained by this transaction. This means that the transaction can only read the data from the database at `start_ts`.

2. The client issues a read request.

1. TiDB receives routing information (how data is distributed among TiKV nodes) from PD.
2. TiDB receives the data of the `start_ts` version from TiKV.

3. The client issues a write request.

TiDB checks whether the written data satisfies consistency constraints (to ensure the data types are correct and the unique index is met etc.) **Valid data is stored in the memory**.
TiDB checks whether the written data satisfies constraints (to ensure the data types are correct, the NOT NULL constraint is met, etc.). **Valid data is stored in the private memory of this transaction in TiDB**.

4. The client issues a commit request.

5. TiDB begins 2PC to ensure the atomicity of distributed transactions and persist data in store.
5. TiDB begins 2PC, and persists the data to storage while guaranteeing the atomicity of transactions.

1. TiDB selects a Primary Key from the data to be written.
2. TiDB receives the information of Region distribution from PD, and groups all keys by Region accordingly.
@@ -60,18 +61,16 @@ From the process of transactions in TiDB above, it is clear that TiDB transactions
However, TiDB transactions also have the following disadvantages:

* Transaction latency due to 2PC
* In need of a centralized version manager
* In need of a centralized timestamp allocation service
* OOM (out of memory) when extensive data is written in the memory

To avoid these potential problems in your application, see [transaction sizes](/transaction-overview.md#transaction-size) for more details.
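
As a rough illustration of keeping transactions small, the sketch below splits a large delete into batches so that each optimistic transaction buffers only a limited amount of data in memory before `COMMIT`; the `orders` table and the batch size are hypothetical.

```sql
-- Hypothetical example: purge old rows in small batches instead of one huge transaction.
-- Each statement is an auto-committed transaction that buffers at most 5000 row changes.
DELETE FROM orders WHERE created_at < '2020-01-01' LIMIT 5000;
-- Repeat the statement (for example, from application code) until it affects 0 rows.
```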

## Transaction retries

TiDB uses optimistic concurrency control by default whereas MySQL applies pessimistic concurrency control. This means that MySQL checks for conflicts during the execution of SQL statements, so there are few errors reported in heavy contention scenarios. For the convenience of MySQL users, TiDB provides a retry function that runs inside a transaction.
In the optimistic transaction model, transactions might fail to be committed because of write-write conflicts in heavy contention scenarios. TiDB uses optimistic concurrency control by default, whereas MySQL applies pessimistic concurrency control. This means that MySQL adds locks during SQL execution, and its Repeatable Read isolation level allows non-repeatable reads, so commits generally do not encounter exceptions. To lower the difficulty of adapting applications, TiDB provides an internal retry mechanism.

### Automatic retry

If there is a conflict, TiDB retries the write operations automatically. You can set `tidb_disable_txn_auto_retry` and `tidb_retry_limit` to enable or disable this default function:
If a write-write conflict occurs during the transaction commit, TiDB automatically retries the SQL statements that include write operations. You can enable the automatic retry by setting `tidb_disable_txn_auto_retry` to `off`, and you can set the retry limit by configuring `tidb_retry_limit`:

```toml
# Whether to disable automatic retry. ("on" by default)
@@ -113,7 +112,7 @@ You can enable the automatic retry in either session level or global level:

> **Note:**
>
> The `tidb_retry_limit` variable decides the maximum number of retries. When this variable is set to `0`, none of the transactions automatically retries, including the implicit single statement transactions that are automatically committed. This is the way to completely disable the automatic retry mechanism in TiDB. After the automatic retry is disabled, all conflicting transactions report failures (includes the `try again later` string) to the application layer in the fastest way.
> The `tidb_retry_limit` variable decides the maximum number of retries. When this variable is set to `0`, none of the transactions automatically retries, including the implicit single statement transactions that are automatically committed. This is the way to completely disable the automatic retry mechanism in TiDB. After the automatic retry is disabled, all conflicting transactions report failures (including the `try again later` message) to the application layer in the fastest way.
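
For example, a minimal sketch of setting these variables at the session or the global level might look like the following (standard `SET` statements; adjust the values to your needs):

```sql
-- Enable automatic retry and allow up to 10 retries for the current session.
SET SESSION tidb_disable_txn_auto_retry = OFF;
SET SESSION tidb_retry_limit = 10;

-- Or change the defaults for all new sessions.
SET GLOBAL tidb_disable_txn_auto_retry = OFF;
SET GLOBAL tidb_retry_limit = 10;
```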

### Limits of retry

@@ -131,37 +130,9 @@ If your application can tolerate lost updates, and does not require `REPEATABLE

## Conflict detection

For the optimistic transaction, it is important to detect whether there are write-write conflicts in the underlying data. Although TiKV reads data for detection **in the prewrite phase**, a conflict pre-detection is also performed in the TiDB clusters to improve the efficiency.

Because TiDB is a distributed database, the conflict detection in the memory is performed in two layers:

* The TiDB layer. If a write-write conflict in the instance is observed after the primary write is issued, it is unnecessary to issue the subsequent writes to the TiKV layer.
* The TiKV layer. TiDB instances are unaware of each other, which means they cannot confirm whether there are conflicts or not. Therefore, the conflict detection is mainly performed in the TiKV layer.

The conflict detection in the TiDB layer is disabled by default. The specific configuration items are as follows:

```toml
[txn-local-latches]
# Whether to enable the latches for transactions. Recommended
# to use latches when there are many local transaction conflicts.
# ("false" by default)
enabled = false
# Controls the number of slots corresponding to Hash. ("204800" by default)
# It automatically adjusts upward to an exponential multiple of 2.
# Each slot occupies 32 Bytes of memory. If set too small,
# it might result in slower running speed and poor performance
# when data writing covers a relatively large range.
capacity = 2048000
```

The value of the `capacity` configuration item mainly affects the accuracy of conflict detection. During conflict detection, only the hash value of each key is stored in the memory. Because the probability of collision when hashing is closely related to the probability of misdetection, you can configure `capacity` to controls the number of slots and enhance the accuracy of conflict detection.

* The smaller the value of `capacity`, the smaller the occupied memory and the greater the probability of misdetection.
* The larger the value of `capacity`, the larger the occupied memory and the smaller the probability of misdetection.

When you confirm that there is no write-write conflict in the upcoming transactions (such as importing data), it is recommended to disable the function of conflict detection.
As a distributed database, TiDB performs in-memory conflict detection in the TiKV layer, mainly in the prewrite phase. TiDB instances are stateless and unaware of each other, which means they cannot know whether their writes result in conflicts across the cluster. Therefore, conflict detection is performed in the TiKV layer.
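
As an illustration of what a detected conflict looks like to the client, the following sketch uses two hypothetical sessions updating the same row; the exact error text depends on the TiDB version, but it includes the `try again later` message mentioned above.

```sql
-- Session A
BEGIN OPTIMISTIC;
UPDATE t SET v = v + 1 WHERE k = 1;

-- Session B (running concurrently)
BEGIN OPTIMISTIC;
UPDATE t SET v = v + 1 WHERE k = 1;
COMMIT;   -- session B commits first and succeeds

-- Session A
COMMIT;   -- the prewrite detects the conflict in TiKV and the commit fails,
          -- for example: ERROR 9007 (HY000): Write conflict ... [try again later]
```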

TiKV also uses a similar mechanism to detect conflicts, but the conflict detection in the TiKV layer cannot be disabled. You can only configure `scheduler-concurrency` to control the number of slots that defined by the modulo operation:
The configuration is as follows:

```toml
# Controls the number of slots. ("2048000" by default)
