Merge branch 'master' into analyze-max-ts
qw4990 authored Jul 20, 2022
2 parents 64a5da2 + dff77a9 commit e1811dd
Showing 18 changed files with 1,363 additions and 859 deletions.
2 changes: 0 additions & 2 deletions .golangci.yml
@@ -3,7 +3,6 @@ run:
linters:
disable-all: true
enable:
- misspell
- ineffassign
- typecheck
- varcheck
@@ -20,7 +19,6 @@ linters:
- bodyclose
- exportloopref
- rowserrcheck
- unconvert
- makezero
- durationcheck
- prealloc
2 changes: 1 addition & 1 deletion Makefile
@@ -31,7 +31,7 @@ dev: checklist check explaintest gogenerate br_unit_test test_part_parser_dev ut
# Install the check tools.
check-setup:tools/bin/revive tools/bin/goword

check: fmt check-parallel lint tidy testSuite check-static vet errdoc
check: check-parallel lint tidy testSuite check-static errdoc

fmt:
@echo "gofmt (simplify)"
2 changes: 1 addition & 1 deletion ddl/ddl.go
@@ -277,9 +277,9 @@ func (dc *ddlCtx) setDDLSourceForDiagnosis(job *model.Job) {
ctx, exists := dc.jobCtx.jobCtxMap[job.ID]
if !exists {
ctx = NewJobContext()
ctx.setDDLLabelForDiagnosis(job)
dc.jobCtx.jobCtxMap[job.ID] = ctx
}
ctx.setDDLLabelForDiagnosis(job)
}

func (dc *ddlCtx) getResourceGroupTaggerForTopSQL(job *model.Job) tikvrpc.ResourceGroupTagger {
190 changes: 190 additions & 0 deletions docs/design/2022-07-20-session-manager.md
@@ -0,0 +1,190 @@
# Proposal: Session Manager

- Author(s): [djshow832](https://github.com/djshow832)
- Tracking Issue: https://github.com/pingcap/tidb/issues/35258

## Abstract

This document proposes the design of a TiDB component called Session Manager. It keeps client connections alive while the TiDB server upgrades, restarts, scales in, and scales out.

## Background

Applications generally connect to TiDB through a connection pool to reduce the overhead of creating connections. Connections in the pool are kept alive, so TiDB has to disconnect the client connections during shutdown. This causes reconnections and QPS jitter on the application side whenever the TiDB cluster performs a rolling upgrade, restarts, or scales in. As a result, database administrators sometimes have to operate TiDB clusters when QPS is at its lowest, typically in the middle of the night, which is painful.

Besides, TiDB needs to be upgraded transparently in the TiDB Cloud Dev Tier once the latest version is ready. The current situation makes it impossible to upgrade TiDB without affecting users.

Therefore, we propose a new TiDB component, called Session Manager. Applications or load balancers connect to the Session Manager instead of TiDB. The Session Manager keeps the session states of current connections and automatically redirects sessions to live TiDB instances when a TiDB instance is down.

![session manager component](./imgs/session-manager-component.png)

### Goal

- When the TiDB cluster performs upgrades or restarts, the Session Manager redirects the backend connections from inactive TiDB instances to active instances. This is especially important on the Dev Tier because TiDB will be upgraded frequently and automatically.
- When the TiDB cluster scales out, the Session Manager is aware of the new TiDB instances and redirects some backend connections to the new instances. This is important in a serverless architecture.
- When the TiDB cluster scales in, the Session Manager waits for the ongoing transactions to be finished and redirects some backend connections to active TiDB instances.

### Non-Goals

- When a TiDB instance fails accidentally, the Session Manager redirects the backend connections from the failed TiDB instance to an active instance.
- Block list, allow list, traffic control, audit logs.

## Proposal

### Deployment

In the cloud, applications typically connect to the Network Load Balancer (NLB), which balances the traffic to the TiDB cluster. Session Manager is placed between the NLB and the TiDB cluster.

The NLB balances the traffic to the Session Manager, and the Session Manager balances the traffic to the TiDB cluster. Most of the time, Session Manager only forwards messages between the NLB and the TiDB instances.

The Session Manager also needs to be highly available. An easy way is to deploy multiple isolated Session Manager instances, but this is painful to maintain: for example, to modify a configuration, the user has to connect to the proxies one by one. What we need is a Session Manager cluster.

![session manager deployment](./imgs/session-manager-deployment.png)

Client addresses should be recorded in slow logs, audit logs, TiDB logs, and the processlist so that users can check the source of requests. Besides, users may configure different privileges for different IPs. However, from the viewpoint of TiDB, the client address is the address of the Session Manager. Some proxies use the [Proxy Protocol](https://www.haproxy.com/blog/using-haproxy-with-the-proxy-protocol-to-better-secure-your-database/) to pass the client address to the server, and TiDB already supports it, so the Session Manager will also use the Proxy Protocol in the handshake phase.
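
For illustration, the sketch below shows what sending a PROXY protocol v1 header could look like when the Session Manager dials a TiDB backend. The header format follows the public PROXY protocol v1 specification, but the function name, the integration point, and the IPv4-only handling are assumptions for this sketch.

```go
// Hypothetical sketch: prepend a PROXY protocol v1 header when dialing a TiDB
// backend so that TiDB sees the real client address. IPv4 is assumed for brevity.
package sessionmgr

import (
	"fmt"
	"net"
)

func dialWithProxyHeader(backendAddr string, clientConn net.Conn) (net.Conn, error) {
	backend, err := net.Dial("tcp", backendAddr)
	if err != nil {
		return nil, err
	}
	// Assumes the client connection is TCP; a real implementation would check this.
	src := clientConn.RemoteAddr().(*net.TCPAddr)
	dst := clientConn.LocalAddr().(*net.TCPAddr)
	// PROXY protocol v1 header: "PROXY TCP4 <src-ip> <dst-ip> <src-port> <dst-port>\r\n"
	header := fmt.Sprintf("PROXY TCP4 %s %s %d %d\r\n", src.IP, dst.IP, src.Port, dst.Port)
	if _, err := backend.Write([]byte(header)); err != nil {
		backend.Close()
		return nil, err
	}
	return backend, nil
}
```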

### TiDB Instance Discovery

Traditional proxies require users to configure the addresses of TiDB instances. When the TiDB cluster scales out, scales in, or switches to another TiDB cluster, users need to reconfigure the proxies.

A Session Manager instance is deployed independently of other TiDB components. To connect to the TiDB cluster, the PD addresses must be passed to the Session Manager before startup. PD embeds an etcd server that stores the addresses of all instances in the cluster, and the Session Manager watches the corresponding etcd keys to detect new TiDB instances, just as TiDB instances themselves do.
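
A minimal sketch of this discovery loop, assuming the etcd v3 Go client and that TiDB instances register themselves under a topology prefix in PD's etcd; the `/topology/tidb` prefix and the helper names are illustrative rather than the exact keys TiDB uses.

```go
// Sketch: watch PD's embedded etcd for TiDB instances joining or leaving.
package sessionmgr

import (
	"context"
	"log"

	clientv3 "go.etcd.io/etcd/client/v3"
)

const tidbTopologyPrefix = "/topology/tidb" // assumed key prefix

func watchTiDBInstances(ctx context.Context, pdEndpoints []string, onChange func()) error {
	cli, err := clientv3.New(clientv3.Config{Endpoints: pdEndpoints})
	if err != nil {
		return err
	}
	defer cli.Close()

	for resp := range cli.Watch(ctx, tidbTopologyPrefix, clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			log.Printf("topology event %s on key %s", ev.Type, ev.Kv.Key)
		}
		onChange() // rebuild the list of available TiDB instances
	}
	return nil
}
```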

The Session Manager should also do a health check on each TiDB instance to ensure it is alive, and migrate the backend connections to other TiDB instances if it is down. The health check is achieved by trying to connect to the MySQL protocol port, just like other proxies do.
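
A sketch of such a liveness probe, assuming a plain TCP dial to the MySQL protocol port is enough to decide liveness; the timeout value is an arbitrary choice for illustration.

```go
// Sketch: treat a failed TCP dial to the MySQL protocol port as "down".
package sessionmgr

import (
	"net"
	"time"
)

func tidbAlive(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return false
	}
	_ = conn.Close()
	return true
}
```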

Session Manager can do various health checks on TiDB instances:

- It can observe the CPU and memory usage of TiDB instances so that it can perform load-based balancing.
- It can observe whether the latest schema is fetched on TiDB instances so that it can avoid directing client connections to TiDB instances that cannot serve requests yet.

When a TiDB instance is shut down gracefully due to scale-in, an upgrade, or a restart, it stops accepting new connections. The health check from the Session Manager then fails, so the Session Manager no longer routes new connections to the instance. However, it still waits for the ongoing queries to finish since the instance is still alive.

When a TiDB instance quits accidentally, the ongoing queries fail immediately and the Session Manager redirects the connections.

![session manager health check](./imgs/session-manager-health-check.png)

### Authentication

When the Session Manager migrates a session, it needs to authenticate with the new TiDB server.

It's unsafe to save user passwords in the Session Manager, so we use token-based authentication:

1. The administrator places a self-signed certificate on each TiDB server. The certificate and key paths are defined by global variables `tidb_auth_signing_cert` and `tidb_auth_signing_key`. The certificates on all the servers are the same so that a message encrypted by one server can be decrypted by another.
2. When the Session Manager is going to migrate a session from one TiDB instance to another, it queries the session token. The session token is composed of the username, the token expiration time, and a signature. The signature is signed with the private key of the certificate.
3. The Session Manager then authenticates with the new TiDB server using a new auth plugin, with the session token acting as the password. The new server checks the username, the token expiration time, and the signature, which is verified with the public key.

To ensure security, TiDB needs to guarantee that:

- The certificate rotates periodically, to minimize the impact of a leaked certificate.
- The username in the session token must be the same as the one in the handshake packet, so a user cannot log in with another user's identity.
- The token must not be expired and the token lifetime must not be too long, so a user cannot forge a token by brute force or keep using a valid token for a long time.
- The signature must be verified, so that TiDB can be sure the token is not forged.
- Secure transport is enforced when querying the session token, so that the token cannot be leaked.
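
The sketch below illustrates the scheme under the assumptions above: the token carries the username and an expiration time, it is signed with the certificate's RSA private key, and the verifier enforces the username and expiration checks. The JSON layout, the RSA-PSS padding, and the function names are illustrative, not the exact format implemented by TiDB.

```go
// Sketch: sign and verify a session token with the shared certificate's key pair.
package sessionmgr

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"encoding/json"
	"errors"
	"time"
)

type sessionToken struct {
	Username   string    `json:"username"`
	ExpireTime time.Time `json:"expire_time"`
}

// signToken is called by the TiDB instance that answers the token query.
func signToken(priv *rsa.PrivateKey, username string, lifetime time.Duration) (payload, sig []byte, err error) {
	payload, err = json.Marshal(sessionToken{Username: username, ExpireTime: time.Now().Add(lifetime)})
	if err != nil {
		return nil, nil, err
	}
	digest := sha256.Sum256(payload)
	sig, err = rsa.SignPSS(rand.Reader, priv, crypto.SHA256, digest[:], nil)
	return payload, sig, err
}

// verifyToken is called by the TiDB instance that receives the migrated session.
func verifyToken(pub *rsa.PublicKey, payload, sig []byte, handshakeUser string) error {
	digest := sha256.Sum256(payload)
	if err := rsa.VerifyPSS(pub, crypto.SHA256, digest[:], sig, nil); err != nil {
		return err // the token is forged or corrupted
	}
	var tok sessionToken
	if err := json.Unmarshal(payload, &tok); err != nil {
		return err
	}
	if tok.Username != handshakeUser {
		return errors.New("token username does not match the handshake username")
	}
	if time.Now().After(tok.ExpireTime) {
		return errors.New("session token has expired")
	}
	return nil
}
```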

### Connection State Maintenance

A MySQL connection is stateful. TiDB maintains a session state for each connection, including session variables, transaction states, and prepared statements. If the Session Manager redirects the frontend query from one backend connection to another without restoring the session state in the new connection, an error may occur.

The basic workflow is as follows:

1. When the client queries from the Session Manager, the Session Manager forwards the commands to TiDB and then forwards the query result from TiDB to the client. The session states are only updated by TiDB.
2. When the Session Manager is going to migrate a session from one TiDB instance to another, it queries the session states from the original TiDB instance and saves them. The Session Manager queries session states by sending `SHOW SESSION_STATES`, whose result is in JSON format.
3. The Session Manager then connects to the new TiDB instance and replays the session states by sending `SET SESSION_STATES '{...}'`, whose parameter is exactly the result of `SHOW SESSION_STATES`.
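
A minimal sketch of this workflow using `database/sql`, assuming the go-sql-driver/mysql driver and that `SHOW SESSION_STATES` returns the states as a single JSON column; the real result layout and quoting rules are defined by TiDB.

```go
// Sketch: copy session states from one backend connection to another.
package sessionmgr

import (
	"context"
	"database/sql"
	"strings"

	_ "github.com/go-sql-driver/mysql" // assumed driver
)

// quoteString embeds the JSON document into a single-quoted SQL string literal.
func quoteString(s string) string {
	s = strings.ReplaceAll(s, `\`, `\\`)
	s = strings.ReplaceAll(s, `'`, `''`)
	return "'" + s + "'"
}

// migrateSessionStates uses *sql.Conn rather than *sql.DB because session
// states are bound to a single connection, not to a connection pool.
func migrateSessionStates(ctx context.Context, oldConn, newConn *sql.Conn) error {
	var states string
	// Step 2: fetch the session states from the original TiDB instance.
	if err := oldConn.QueryRowContext(ctx, "SHOW SESSION_STATES").Scan(&states); err != nil {
		return err
	}
	// Step 3: replay the states on the new TiDB instance.
	_, err := newConn.ExecContext(ctx, "SET SESSION_STATES "+quoteString(states))
	return err
}
```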

Session states include:

- Session variables. All the session variables are replayed, because the default values may be different between TiDB instances.
- Prepared statements with their IDs. The prepared statements are created in either binary or text protocol.
- Session SQL bindings.
- User-defined variables.
- The current database.
- Last insert ID, found rows, and row count for the last query, as well as last query info, last transaction info, last DDL info.
- Last sequence values.
- Last warning and error messages.

Transactions are hard to restore, so Session Manager doesn't support restoring a transaction. Session Manager must wait until the current transaction finishes or the TiDB instance exits due to the shutdown timeout. To know whether a session has an active transaction, Session Manager needs to track the transaction status, which can be done by parsing the status flags in the response packets.
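
A sketch of this tracking, assuming the Session Manager has already extracted the 2-byte status-flag field from the OK/EOF packets it forwards; `SERVER_STATUS_IN_TRANS` (0x0001) is the documented MySQL protocol bit that marks an open transaction.

```go
// Sketch: track whether a backend session has an active transaction.
package sessionmgr

const serverStatusInTrans uint16 = 0x0001 // SERVER_STATUS_IN_TRANS

type backendConn struct {
	inTransaction bool
}

// updateTxnStatus is called after every response packet whose status flags
// have been parsed by the Session Manager.
func (c *backendConn) updateTxnStatus(statusFlags uint16) {
	c.inTransaction = statusFlags&serverStatusInTrans != 0
}

// canRedirect reports whether it is currently safe to migrate this session.
func (c *backendConn) canRedirect() bool {
	return !c.inTransaction
}
```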

Similarly, Session Manager doesn't support restoring a result set. If the client uses a cursor to read, Session Manager must wait until all the data has been fetched. Session Manager can parse the request and response packets to know whether a prepared statement is using a cursor and whether all the data has been fetched.

Besides, there are some other limitations:

- When the session contains local temporary tables, table locks, or advisory locks, TiDB won't return the session states and the Session Manager will report a connection failure.
- For long-running queries, such as `ADD INDEX` and `LOAD DATA`, TiDB probably won't wait until they finish. In this case, the client will be disconnected.
- Session Manager needs to reconnect to the new TiDB instance, which involves a handshake, authentication, and session state initialization, so there will be performance jitter during the redirection.
- The session-level plan cache on the new TiDB instance is empty, so there will be slight performance jitter for a while after the redirection.

### Configuration

Static configurations, such as the port, are read before the Session Manager starts and cannot be changed online. These configurations can be set by command-line parameters.

Dynamic configurations must be changeable without a restart, because the Session Manager is supposed to be always online. They can be overwritten at any time and take effect on the whole cluster. These configurations can be stored in an etcd server, which is deployed on the same machine as the Session Manager, and each Session Manager instance watches the etcd keys to update the configurations in time.

Session Manager provides an HTTP API to update dynamic configurations online, just as the other components do.
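
A hypothetical sketch of such an API: a handler that writes the new configuration into etcd so that every instance watching the key picks it up. The endpoint shape and the `/sessionmgr/config` key are illustrative, not an existing API.

```go
// Sketch: an HTTP handler that stores dynamic configuration in etcd.
package sessionmgr

import (
	"context"
	"io"
	"net/http"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func configHandler(etcdCli *clientv3.Client) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodPut {
			http.Error(w, "only PUT is supported", http.StatusMethodNotAllowed)
			return
		}
		body, err := io.ReadAll(r.Body)
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		ctx, cancel := context.WithTimeout(r.Context(), 3*time.Second)
		defer cancel()
		// Every Session Manager instance watches this key and reloads the configuration.
		if _, err := etcdCli.Put(ctx, "/sessionmgr/config", string(body)); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}
```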

### Observability

Session Manager is one of the products in the TiDB ecosystem, so it's reasonable to integrate it with Grafana and TiDB-Dashboard.

Like the other components, Session Manager also reports metrics to Prometheus. The metrics include but are not limited to:

- The CPU and memory of each Session Manager instance
- The number of successful and failed session migrations
- The latency and QPS of queries
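
As a sketch, these metrics could be registered with prometheus/client_golang, the same library the other components use; the metric names below are illustrative.

```go
// Sketch: register Session Manager metrics and expose them to Prometheus.
package sessionmgr

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	migrationCounter = prometheus.NewCounterVec(prometheus.CounterOpts{
		Namespace: "session_manager",
		Name:      "session_migrations_total",
		Help:      "Number of session migrations, labeled by result.",
	}, []string{"result"}) // "succeed" or "fail"

	queryDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Namespace: "session_manager",
		Name:      "query_duration_seconds",
		Help:      "Latency of queries forwarded through the Session Manager.",
		Buckets:   prometheus.DefBuckets,
	})
)

func serveMetrics(addr string) error {
	prometheus.MustRegister(migrationCounter, queryDuration)
	http.Handle("/metrics", promhttp.Handler())
	return http.ListenAndServe(addr, nil)
}
```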

TiDB-Dashboard should be able to fetch the logs and profiling data of each Session Manager instance.

For troubleshooting, Session Manager provides an HTTP API to fetch instance-scoped or global-scoped data, such as:

- The processlist on the Session Manager instance
- The available TiDB instances from the viewpoint of the Session Manager

## Compatibility

### Upgrade Compatibility

Session Manager is supposed to be simple and stable enough that it rarely needs to be upgraded.

However, we can never guarantee that Session Manager will be bug-free, so it still needs to support rolling upgrades. During an upgrade, client connections will inevitably be disconnected.

### MySQL Compatibility

Session Manager connects to the MySQL protocol port of TiDB servers, so it should be compatible with MySQL.

## Test Plans

Session Manager is an essential component of the query path, so it's very important to ensure its stability.

We have lots of cases to test, including:

- Test various ORMs and connectors across all versions. The MySQL protocol, especially the authentication part, differs among those versions.
- Test various L4 proxies. Different proxies use different methods to check the health of the Session Manager.
- Test various statements, including randomly generated statements and MySQL tests.
- Run all the scenario tests that we have, and make the Session Manager redirect sessions randomly at any time.

## Alternative Proposals

Traditional SQL proxies typically maintain the session states themselves rather than relying on the backend SQL servers. They parse every response packet, or even every request packet, to incrementally update the session states.

This is also possible for Session Manager. MySQL supports the [`CLIENT_SESSION_TRACK` capability](https://dev.mysql.com/doc/internals/en/packet-OK_Packet.html#cs-sect-packet-ok-sessioninfo), which is also intended for session migration. The MySQL server can send human-readable state information and some predefined session states in the OK packet when the session states change.

The most significant advantage of this method is that Session Manager can support failover. Since Session Manager always holds up-to-date session states, it can migrate sessions at any time, even if a TiDB instance fails accidentally.

However, this method has some drawbacks:

- There are only 4 predefined state types, and the type is encoded in an `int<1>`. However, TiDB has tens of state types, some of which are TiDB-specific. We cannot extend the state types, because doing so would break forward compatibility if MySQL adds more state types in the future.
- For some changes, e.g. user-defined variables, the OK packet only notifies the client that there is a change, but doesn't tell it what the change is. We would also need to extend the protocol, which is a risk.
- There are some inevitable limitations to failover. For example, Session Manager will never know whether a statement succeeded when TiDB doesn't respond to a `COMMIT` statement or an auto-commit DML statement.
- To support failover, Session Manager would also have to be capable of reconnecting to a new server without a fresh token. That means Session Manager may need to obtain user passwords in some way.

## Future Work

The most attractive scenario for routing client connections is multi-tenancy.

These are some scenarios where multi-tenancy is useful:

- Separate different businesses (or workloads) to achieve resource isolation. Each business is assigned a tenant.
- Multiple users share a TiDB cluster to save cost.

![session manager multi-tenancy](./imgs/session-manager-multi-tenancy.png)

In this architecture, the NLB is not aware of tenants. Each TiDB instance belongs to only one tenant to isolate resources. Thus, it's Session Manager's responsibility to route sessions to different TiDB instances.

Session Manager can distinguish tenants by the Server Name Indication (SNI) in the TLS handshake.
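
A hypothetical sketch of SNI-based routing: the Go standard library exposes the requested server name in `tls.ClientHelloInfo`, which can be mapped to a tenant's TiDB instances. The tenant naming scheme and the lookup are assumptions.

```go
// Sketch: derive the tenant from the TLS SNI and pick that tenant's backends.
package sessionmgr

import (
	"crypto/tls"
	"strings"
)

// tenantFromSNI extracts the tenant ID from a server name such as
// "tenant-42.cluster.example.com" (naming scheme assumed for illustration).
func tenantFromSNI(serverName string) string {
	return strings.SplitN(serverName, ".", 2)[0]
}

func listenerConfig(baseCert tls.Certificate, backendsByTenant map[string][]string) *tls.Config {
	return &tls.Config{
		Certificates: []tls.Certificate{baseCert},
		GetConfigForClient: func(hello *tls.ClientHelloInfo) (*tls.Config, error) {
			tenant := tenantFromSNI(hello.ServerName)
			// Route the rest of this connection to the tenant's TiDB instances
			// (routing logic omitted in this sketch).
			_ = backendsByTenant[tenant]
			return nil, nil // nil keeps the base TLS config
		},
	}
}
```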
Binary file added docs/design/imgs/session-manager-component.png
Binary file added docs/design/imgs/session-manager-deployment.png
Binary file added docs/design/imgs/session-manager-health-check.png
Binary file added docs/design/imgs/session-manager-multi-tenancy.png
5 changes: 5 additions & 0 deletions executor/BUILD.bazel
@@ -120,6 +120,7 @@ go_library(
"//parser/model",
"//parser/mysql",
"//parser/terror",
"//parser/types",
"//planner",
"//planner/core",
"//planner/util",
@@ -215,6 +216,7 @@ go_library(
"@com_github_tikv_client_go_v2//oracle",
"@com_github_tikv_client_go_v2//tikv",
"@com_github_tikv_client_go_v2//tikvrpc",
"@com_github_tikv_client_go_v2//txnkv",
"@com_github_tikv_client_go_v2//txnkv/txnlock",
"@com_github_tikv_client_go_v2//txnkv/txnsnapshot",
"@com_github_tikv_client_go_v2//util",
@@ -342,6 +344,7 @@ go_test(
"//kv",
"//meta",
"//meta/autoid",
"//metrics",
"//parser",
"//parser/ast",
"//parser/auth",
@@ -419,6 +422,8 @@ go_test(
"@com_github_pingcap_sysutil//:sysutil",
"@com_github_pingcap_tipb//go-binlog",
"@com_github_pingcap_tipb//go-tipb",
"@com_github_prometheus_client_golang//prometheus",
"@com_github_prometheus_client_model//go",
"@com_github_prometheus_common//model",
"@com_github_stretchr_testify//require",
"@com_github_tikv_client_go_v2//oracle",
4 changes: 2 additions & 2 deletions executor/analyzetest/analyze_test.go
@@ -85,10 +85,10 @@ PARTITION BY RANGE ( a ) (
require.Len(t, statsTbl.Columns, 3)
require.Len(t, statsTbl.Indices, 1)
for _, col := range statsTbl.Columns {
require.Greater(t, col.Len()+col.Num(), 0)
require.Greater(t, col.Len()+col.TopN.Num(), 0)
}
for _, idx := range statsTbl.Indices {
require.Greater(t, idx.Len()+idx.Num(), 0)
require.Greater(t, idx.Len()+idx.TopN.Num(), 0)
}
}

2 changes: 1 addition & 1 deletion executor/show.go
@@ -1359,7 +1359,7 @@ func appendPartitionInfo(partitionInfo *model.PartitionInfo, buf *bytes.Buffer,
}
}
// this if statement takes care of lists/range columns case
if partitionInfo.Columns != nil {
if len(partitionInfo.Columns) > 0 {
// partitionInfo.Type == model.PartitionTypeRange || partitionInfo.Type == model.PartitionTypeList
// Notice that MySQL uses two spaces between LIST and COLUMNS...
fmt.Fprintf(buf, "\nPARTITION BY %s COLUMNS(", partitionInfo.Type.String())