Update wording in overview and readme (pingcap#305)
lilin90 authored and QueenyJin committed Dec 20, 2017
1 parent 91e78dd commit 8abec5b
Showing 2 changed files with 16 additions and 16 deletions.
20 changes: 10 additions & 10 deletions README.md
@@ -118,7 +118,7 @@ TiDB (The pronunciation is: /'taɪdiːbi:/ tai-D-B, etymology: titanium) is a Hy

- __Horizontal scalability__
- __Compatible with MySQL protocol__
- - __Automatic Failover and high availability__
+ - __Automatic failover and high availability__
- __Consistent distributed transactions__
- __Online DDL__
- __Multiple storage engine support__
@@ -141,33 +141,33 @@ Read the [Roadmap](https://github.com/pingcap/docs/blob/master/ROADMAP.md).
- **Stack Overflow**: https://stackoverflow.com/questions/tagged/tidb
- **Mailing list**: [Google Group](https://groups.google.com/forum/#!forum/tidb-user)

- ## TiDB Architecture
+ ## TiDB architecture

To better understand TiDB’s features, you need to understand the TiDB architecture.

![image alt text](media/tidb-architecture.png)

- The TiDB cluster has three components: the TiDB Server, the PD Server, and the TiKV server.
+ The TiDB cluster has three components: the TiDB server, the PD server, and the TiKV server.

- ### TiDB Server
+ ### TiDB server

The TiDB server is in charge of the following operations:

1. Receiving the SQL requests

2. Processing the SQL related logics

- 3. Locating the TiKV address for storing and computing data through the Placement Driver (PD)
+ 3. Locating the TiKV address for storing and computing data through Placement Driver (PD)

4. Exchanging data with TiKV

5. Returning the result

- The TiDB server is stateless. It doesn’t store data and it is for computing only. TiDB is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5.
+ The TiDB server is stateless. It does not store data and it is for computing only. TiDB is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5.
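Because every TiDB server is stateless, a balancer can hand each request to any instance and the answer is the same. A minimal round-robin sketch of that idea (the server addresses are invented for illustration, and port 4000 is assumed here as TiDB's conventional MySQL-protocol port; a production setup would use LVS, HAProxy, or F5 as the text says):

```python
from itertools import cycle

# Hypothetical TiDB endpoints; addresses and the port are assumptions,
# not taken from the document.
TIDB_SERVERS = ["10.0.0.1:4000", "10.0.0.2:4000", "10.0.0.3:4000"]

def round_robin(servers):
    """Yield servers in turn, as a simple balancer might."""
    return cycle(servers)

picker = round_robin(TIDB_SERVERS)
first_four = [next(picker) for _ in range(4)]
# The fourth pick wraps around to the first server; since TiDB servers
# hold no state, any of them can serve any request.
print(first_four)
```

This is only the dispatch arithmetic; a real balancer also does health checks and connection pooling.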

- ### Placement Driver Server
+ ### Placement Driver server

- Placement Driver (PD) is the managing component of the entire cluster and is in charge of the following three operations:
+ The Placement Driver (PD) server is the managing component of the entire cluster and is in charge of the following three operations:

1. Storing the metadata of the cluster such as the region location of a specific key.

@@ -177,9 +177,9 @@ Placement Driver (PD) is the managing component of the entire cluster and is in

As a cluster, PD needs to be deployed to an odd number of nodes. Usually it is recommended to deploy to 3 online nodes at least.
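The odd-node recommendation follows from majority quorums: an even-sized cluster tolerates no more failures than the odd-sized cluster one node smaller. A quick sketch of the counting (this only illustrates the arithmetic, not PD's actual election mechanism):

```python
def majority(n: int) -> int:
    """Quorum size for an n-node majority-vote cluster."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """Nodes that can fail while a majority still survives."""
    return n - majority(n)

for n in (3, 4, 5):
    print(n, majority(n), tolerated_failures(n))
# 3 nodes and 4 nodes both tolerate only 1 failure, so the fourth
# (even) node buys no extra fault tolerance -- hence 3 or 5 nodes.
```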

- ### TiKV Server
+ ### TiKV server

- TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure the data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes are scheduled by PD. Region is also the basic unit for scheduling the load balance.
+ The TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure the data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes are scheduled by PD. Region is also the basic unit for scheduling the load balance.
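The left-closed, right-open interval [StartKey, EndKey) means a key equal to a Region's EndKey belongs to the next Region, so the ranges tile the keyspace with no gaps or overlaps. A small sketch of the lookup with invented Region boundaries (real TiDB keys are byte strings and PD performs the actual routing; this only illustrates the interval rule):

```python
import bisect

# Hypothetical Region table, sorted by StartKey; each Region
# covers [StartKey, EndKey). Boundaries here are made up.
REGIONS = [
    ("",  "g",    "region-1"),  # ["",  "g")
    ("g", "p",    "region-2"),  # ["g", "p")
    ("p", "\xff", "region-3"),  # ["p", "\xff")
]
START_KEYS = [r[0] for r in REGIONS]

def locate_region(key: str) -> str:
    """Find the Region whose [StartKey, EndKey) interval contains key."""
    i = bisect.bisect_right(START_KEYS, key) - 1
    start, end, name = REGIONS[i]
    assert start <= key < end  # left-closed, right-open
    return name

print(locate_region("g"))  # "region-2": EndKey of region-1 is exclusive
```

The exclusive EndKey is what makes Region splits cheap to describe: splitting at key k just turns [a, b) into [a, k) and [k, b).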

## Features

12 changes: 6 additions & 6 deletions overview.md
@@ -11,7 +11,7 @@ TiDB (The pronunciation is: /'taɪdiːbi:/ tai-D-B, etymology: titanium) is a Hy

- __Horizontal scalability__
- __Compatible with MySQL protocol__
- - __Automatic Failover and high availability__
+ - __Automatic failover and high availability__
- __Consistent distributed transactions__
- __Online DDL__
- __Multiple storage engine support__
@@ -40,7 +40,7 @@ To better understand TiDB’s features, you need to understand the TiDB architec

![image alt text](media/tidb-architecture.png)

- The TiDB cluster has three components: the TiDB Server, the PD Server, and the TiKV server.
+ The TiDB cluster has three components: the TiDB server, the PD server, and the TiKV server.

### TiDB server

@@ -50,17 +50,17 @@ The TiDB server is in charge of the following operations:

2. Processing the SQL related logics

- 3. Locating the TiKV address for storing and computing data through the Placement Driver (PD)
+ 3. Locating the TiKV address for storing and computing data through Placement Driver (PD)

4. Exchanging data with TiKV

5. Returning the result

- The TiDB server is stateless. It doesn’t store data and it is for computing only. TiDB is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5.
+ The TiDB server is stateless. It does not store data and it is for computing only. TiDB is horizontally scalable and provides the unified interface to the outside through the load balancing components such as Linux Virtual Server (LVS), HAProxy, or F5.

### Placement Driver server

- Placement Driver (PD) is the managing component of the entire cluster and is in charge of the following three operations:
+ The Placement Driver (PD) server is the managing component of the entire cluster and is in charge of the following three operations:

1. Storing the metadata of the cluster such as the region location of a specific key.

@@ -72,7 +72,7 @@ As a cluster, PD needs to be deployed to an odd number of nodes. Usually it is r

### TiKV server

- TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure the data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes are scheduled by PD. Region is also the basic unit for scheduling the load balance.
+ The TiKV server is responsible for storing data. From an external view, TiKV is a distributed transactional Key-Value storage engine. Region is the basic unit to store data. Each Region stores the data for a particular Key Range which is a left-closed and right-open interval from StartKey to EndKey. There are multiple Regions in each TiKV node. TiKV uses the Raft protocol for replication to ensure the data consistency and disaster recovery. The replicas of the same Region on different nodes compose a Raft Group. The load balancing of the data among different TiKV nodes are scheduled by PD. Region is also the basic unit for scheduling the load balance.

## Features

