Documentation: Fix heading hierarchy.
Correct the hierarchy of Markdown symbols in document headings.
Josh Wood committed Oct 20, 2015
1 parent 704bff0 commit 98bdeab
Showing 17 changed files with 82 additions and 79 deletions.
2 changes: 1 addition & 1 deletion Documentation/04_to_2_snapshot_migration.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,4 @@
## Snapshot Migration
# Snapshot Migration

You can migrate a snapshot of your data from a v0.4.9+ cluster into a new etcd 2.2 cluster using a snapshot migration. After snapshot migration, the etcd indexes of your data will change. Many etcd applications rely on these indexes to behave correctly. This operation should only be done while all etcd applications are stopped.

26 changes: 13 additions & 13 deletions Documentation/admin_guide.md
@@ -1,8 +1,8 @@
## Administration
# Administration

### Data Directory
## Data Directory

#### Lifecycle
### Lifecycle

When first started, etcd stores its configuration into a data directory specified by the data-dir configuration parameter.
Configuration is stored in the write ahead log and includes: the local member ID, cluster ID, and initial cluster configuration.
@@ -20,7 +20,7 @@ Once removed the member can be re-added with an empty data directory.

[remove-a-member]: runtime-configuration.md#remove-a-member

#### Contents
### Contents

The data directory has two sub-directories in it:

@@ -32,18 +32,18 @@ If `--wal-dir` flag is set, etcd will write the write ahead log files to the spe
[wal-pkg]: http://godoc.org/github.com/coreos/etcd/wal
[snap-pkg]: http://godoc.org/github.com/coreos/etcd/snap

### Cluster Management
## Cluster Management

#### Lifecycle
### Lifecycle

If you are spinning up multiple clusters for testing, it is recommended that you specify a unique initial-cluster-token for the different clusters.
This can protect you from cluster corruption in case of mis-configuration, because two members started with different cluster tokens will reject each other.
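
For example, two test clusters brought up side by side might each be given their own token (the token values here are illustrative):

```
etcd -initial-cluster-token etcd-cluster-test-1 ...
etcd -initial-cluster-token etcd-cluster-test-2 ...
```

Since members started with different tokens reject each other, a mistakenly cross-configured peer URL cannot silently join the wrong cluster.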

#### Monitoring
### Monitoring

It is important to monitor your production etcd cluster for health information and runtime metrics.

##### Health Monitoring
#### Health Monitoring

At the lowest level, etcd exposes health information via HTTP at `/health` in JSON format. If it returns `{"health": "true"}`, then the cluster is healthy. Please note the `/health` endpoint is still experimental as of etcd 2.2.
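
A minimal health check against a single member might look like the following (assuming a member listening on the default client port):

```
$ curl http://127.0.0.1:2379/health
{"health": "true"}
```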

@@ -63,16 +63,16 @@ member fd422379fda50e48 is healthy: got healthy result from http://127.0.0.1:323
cluster is healthy
```

##### Runtime Metrics
#### Runtime Metrics

etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. You can read more through the runtime metrics [doc](metrics.md).

#### Debugging
### Debugging

Debugging a distributed system can be difficult. etcd provides several ways to make debugging easier.

##### Enabling Debug Logging
#### Enabling Debug Logging

When you want to debug etcd without stopping it, you can enable debug logging at runtime.
etcd exposes logging configuration at `/config/local/log`.
@@ -85,7 +85,7 @@ $ curl http://127.0.0.1:2379/config/local/log -XPUT -d '{"Level":"INFO"}'
$ # debug logging disabled
```

##### Debugging Variables
#### Debugging Variables

Debug variables are exposed for real-time debugging purposes. Developers who are familiar with etcd can utilize these variables to debug unexpected behavior. etcd exposes debug variables via HTTP at `/debug/vars` in JSON format. The debug variables contain `cmdline`, `file_descriptor_limit`, `memstats` and `raft.status`.
@@ -107,7 +107,7 @@ Debug variables are exposed for real-time debugging purposes. Developers who are
}
```

#### Optimal Cluster Size
### Optimal Cluster Size

The recommended etcd cluster size is 3, 5 or 7, determined by the fault tolerance requirement. A 7-member cluster provides enough fault tolerance in most cases. While a larger cluster provides better fault tolerance, write performance degrades since data needs to be replicated to more machines.
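
The sizing guidance follows from the quorum rule: a cluster of n members needs a quorum of n/2 + 1 (integer division) members to make progress, so it tolerates n minus quorum failures. A quick sketch:

```shell
# quorum = n/2 + 1 (integer division); failures tolerated = n - quorum
for n in 1 3 5 7 9; do
  quorum=$(( n / 2 + 1 ))
  echo "size=$n quorum=$quorum failures_tolerated=$(( n - quorum ))"
done
```

This is also why even cluster sizes buy nothing: growing from 3 to 4 members raises the quorum from 2 to 3 without tolerating any additional failures.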

4 changes: 2 additions & 2 deletions Documentation/branch_management.md
@@ -1,6 +1,6 @@
## Branch Management
# Branch Management

### Guide
## Guide

- New development occurs on the [master branch](https://github.com/coreos/etcd/tree/master)
- Master branch should always have a green build!
2 changes: 1 addition & 1 deletion Documentation/errorcode.md
@@ -1,4 +1,4 @@
Error Code
# Error Code
======

This document describes the error code used in key space '/v2/keys'. Feel free to import 'github.com/coreos/etcd/error' to use.
16 changes: 8 additions & 8 deletions Documentation/glossary.md
@@ -1,35 +1,35 @@
## Glossary
# Glossary

This document defines the various terms used in etcd documentation, command line and source code.

### Node
## Node

Node is an instance of the raft state machine.

It has a unique identifier, and records other nodes' progress internally when it is the leader.

### Member
## Member

Member is an instance of etcd. It hosts a node, and provides service to clients.

### Cluster
## Cluster

Cluster consists of several members.

The node in each member follows the raft consensus protocol to replicate logs. The cluster receives proposals from members, commits them and applies them to the local store.

### Peer
## Peer

Peer is another member of the same cluster.

### Proposal
## Proposal

A proposal is a request (for example a write request, a configuration change request) that needs to go through raft protocol.

### Client
## Client

Client is a caller of the cluster's HTTP API.

### Machine (deprecated)
## Machine (deprecated)

The alternative name for Member in etcd before 2.0.
2 changes: 1 addition & 1 deletion Documentation/libraries-and-tools.md
@@ -1,4 +1,4 @@
## Libraries and Tools
# Libraries and Tools

**Tools**

16 changes: 8 additions & 8 deletions Documentation/metrics.md
@@ -1,6 +1,6 @@
## Metrics
# Metrics

**NOTE: The metrics feature is considered as an experimental. We might add/change/remove metrics without warning in the future releases.**
**NOTE: The metrics feature is considered experimental. We may add/change/remove metrics without warning in future releases.**

etcd uses [Prometheus](http://prometheus.io/) for metrics reporting in the server. The metrics can be used for real-time monitoring and debugging.
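
To inspect the raw metrics a member is exporting, its client URL can be scraped directly (assuming a member on the default client port):

```
$ curl http://127.0.0.1:2379/metrics
```

The output is in the Prometheus text exposition format, so a Prometheus server can scrape the same endpoint as-is.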

@@ -13,7 +13,7 @@ The naming of metrics follows the suggested [best practice of Prometheus](http:/

etcd now exposes the following metrics:

### etcdserver
## etcdserver

| Name | Description | Type |
|-----------------------------------------|--------------------------------------------------|---------|
@@ -30,7 +30,7 @@ Pending proposal (`pending_proposal_total`) gives you an idea about how many pro

Failed proposals (`proposal_failed_total`) are normally related to two issues: temporary failures related to a leader election or longer duration downtime caused by a loss of quorum in the cluster.

### wal
## wal

| Name | Description | Type |
|------------------------------------|--------------------------------------------------|---------|
@@ -40,7 +40,7 @@ Failed proposals (`proposal_failed_total`) are normally related to two issues: t
Abnormally high fsync duration (`fsync_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.


### http requests
## http requests

These metrics describe the serving of requests (non-watch events) by etcd members in non-proxy mode: total
incoming requests, request failures and processing latency (including raft rounds for storage). They are useful for tracking
@@ -71,7 +71,7 @@ Example Prometheus queries that may be useful from these metrics (across all etc

Show the 0.90-tile latency (in seconds) of read/write (respectively) event handling across all members, with a window of `5m`.

### snapshot
## snapshot

| Name | Description | Type |
|--------------------------------------------|------------------------------------------------------------|---------|
@@ -80,7 +80,7 @@ Example Prometheus queries that may be useful from these metrics (across all etc
Abnormally high snapshot duration (`snapshot_save_total_durations_microseconds`) indicates disk issues and might cause the cluster to be unstable.


### rafthttp
## rafthttp

| Name | Description | Type | Labels |
|-----------------------------------|--------------------------------------------|---------|--------------------------------|
@@ -99,7 +99,7 @@ Label `msgType` is the type of raft message. `MsgApp` is log replication message
Label `remoteID` is the member ID of the message destination.


### proxy
## proxy

etcd members operating in proxy mode do not perform store operations. They forward all requests
to cluster instances.
6 changes: 3 additions & 3 deletions Documentation/other_apis.md
@@ -1,4 +1,4 @@
## Members API
# Members API

* [List members](#list-members)
* [Add a member](#add-a-member)
@@ -103,15 +103,15 @@ Change the peer urls of a given member. The member ID must be a hex-encoded uint

If the POST body is malformed an HTTP 400 will be returned. If the member does not exist in the cluster an HTTP 404 will be returned. If any of the given peerURLs exists in the cluster an HTTP 409 will be returned. If the cluster fails to process the request within timeout an HTTP 500 will be returned, though the request may be processed later.

#### Request
### Request

```
PUT /v2/members/<id> HTTP/1.1
{"peerURLs": ["http://10.0.0.10:2380"]}
```

#### Example
### Example

```sh
curl http://10.0.0.10:2379/v2/members/272e204152 -XPUT \
6 changes: 4 additions & 2 deletions Documentation/production-ready.md
@@ -1,4 +1,6 @@
# etcd in Production

etcd is being used successfully by many companies in production. It is,
however, under active development and systems like etcd are difficult to get
correct. If you are comfortable with bleeding-edge software please use etcd and
however, under active development, and systems like etcd are difficult to get
correct. If you are comfortable with bleeding-edge software, please use etcd and
provide us with the feedback and testing young software needs.
15 changes: 8 additions & 7 deletions Documentation/proxy.md
@@ -1,4 +1,4 @@
## Proxy
# Proxy

etcd can now run as a transparent proxy. Running etcd as a proxy allows for easy discovery of etcd within your infrastructure, since it can run on each machine as a local service. In this mode, etcd acts as a reverse proxy and forwards client requests to an active etcd cluster. The etcd proxy does not participate in the consensus replication of the etcd cluster, thus it neither increases the resilience nor decreases the write performance of the etcd cluster.

@@ -8,14 +8,14 @@ The proxy will shuffle the list of cluster members periodically to avoid sending

The member list used by the proxy consists of all client URLs advertised in the cluster, as specified in each member's `-advertise-client-urls` flag. If this flag is set incorrectly, requests sent to the proxy are forwarded to the wrong addresses and fail. Including URLs in the `-advertise-client-urls` flag that point to the proxy itself, e.g. http://localhost:2379, is even more problematic, as it causes loops: the proxy keeps trying to forward requests to itself until its resources (memory, file descriptors) are eventually depleted. The fix is to restart the etcd member with a correct `-advertise-client-urls` flag. After the proxy's client URL list is recalculated, which happens every 30 seconds, requests will be forwarded correctly.

### Using an etcd proxy
## Using an etcd proxy
To start etcd in proxy mode, you need to provide three flags: `proxy`, `listen-client-urls`, and `initial-cluster` (or `discovery`).

To start a readwrite proxy, set `-proxy on`; to start a readonly proxy, set `-proxy readonly`.

The proxy will listen on `listen-client-urls` and forward requests to the etcd cluster discovered from the `initial-cluster` flag or `discovery` url.

#### Start an etcd proxy with a static configuration
### Start an etcd proxy with a static configuration
To start a proxy that will connect to a statically defined etcd cluster, specify the `initial-cluster` flag:

```
@@ -24,7 +24,7 @@ etcd -proxy on \
-initial-cluster infra0=http://10.0.1.10:2380,infra1=http://10.0.1.11:2380,infra2=http://10.0.1.12:2380
```

#### Start an etcd proxy with the discovery service
### Start an etcd proxy with the discovery service
If you bootstrap an etcd cluster using the [discovery service][discovery-service], you can also start the proxy with the same `discovery`.

To start a proxy using the discovery service, specify the `discovery` flag. The proxy will wait until the etcd cluster defined at the `discovery` url finishes bootstrapping, and then start to forward the requests.
@@ -35,10 +35,11 @@ etcd -proxy on \
-discovery https://discovery.etcd.io/3e86b59982e49066c5d813af1c2e2579cbf573de
```

#### Fallback to proxy mode with discovery service
## Fallback to proxy mode with discovery service

If you bootstrap an etcd cluster using the [discovery service][discovery-service] with more than the expected number of etcd members, the extra etcd processes will fall back to being `readwrite` proxies by default. They will forward requests to the cluster as described above. For example, if you create a discovery url with `size=5` and start ten etcd processes using that same discovery url, the result will be a cluster with five etcd members and five proxies. Note that this behaviour can be disabled with the `proxy-fallback` flag.

### Promote a proxy to a member of etcd cluster
## Promote a proxy to a member of etcd cluster

A proxy is the part of the etcd cluster that does not participate in consensus. A proxy will never automatically promote itself to an etcd member that participates in consensus.

@@ -49,7 +50,7 @@ If you want to promote a proxy to an etcd member, there are four steps you need
- remove the existing proxy data directory
- restart the etcd process with new member configuration

#### Example
## Example

We assume you have a one-member etcd cluster with one proxy. The cluster information is listed below:

4 changes: 2 additions & 2 deletions Documentation/reporting_bugs.md
@@ -1,4 +1,4 @@
## Reporting Bugs
# Reporting Bugs

If you find bugs or documentation mistakes in the etcd project, please let us know by [opening an issue](https://github.com/coreos/etcd/issues/new). We treat bugs and mistakes very seriously and believe no issue is too small. Before creating a bug report, please check that one does not already exist.

@@ -20,7 +20,7 @@ We might ask you for further information to locate a bug. A duplicated bug repor

## Frequently Asked Questions

### How to get stack trace
### How to get a stack trace

``` bash
$ kill -QUIT $PID
14 changes: 7 additions & 7 deletions Documentation/rfc/v3api.md
@@ -1,4 +1,4 @@
## Design
# Design

1. Flatten binary key-value space

@@ -32,9 +32,9 @@

[protobuf](./v3api.proto)

### Examples
## Examples

#### Put a key (foo=bar)
### Put a key (foo=bar)
```
// A put is always successful
Put( PutRequest { key = foo, value = bar } )
@@ -47,7 +47,7 @@ PutResponse {
}
```

#### Get a key (assume we have foo=bar)
### Get a key (assume we have foo=bar)
```
Get ( RangeRequest { key = foo } )
@@ -68,7 +68,7 @@ RangeResponse {
}
```

#### Range over a key space (assume we have foo0=bar0… foo100=bar100)
### Range over a key space (assume we have foo0=bar0… foo100=bar100)
```
Range ( RangeRequest { key = foo, end_key = foo80, limit = 30 } )
@@ -97,7 +97,7 @@ RangeResponse {
}
```

#### Finish a txn (assume we have foo0=bar0, foo1=bar1)
### Finish a txn (assume we have foo0=bar0, foo1=bar1)
```
Txn(TxnRequest {
// mod_revision of foo0 is equal to 1, mod_revision of foo1 is greater than 1
@@ -129,7 +129,7 @@ TxnResponse {
}
```

#### Watch on a key/range
### Watch on a key/range

```
Watch( WatchRequest{
6 changes: 3 additions & 3 deletions Documentation/runtime-configuration.md
@@ -1,4 +1,4 @@
## Runtime Reconfiguration
# Runtime Reconfiguration

etcd comes with support for incremental runtime reconfiguration, which allows users to update the membership of the cluster at run time.

@@ -131,7 +131,7 @@ The new member will run as a part of the cluster and immediately begin catching
If you are adding multiple members the best practice is to configure a single member at a time and verify it starts correctly before adding more new members.
If you add a new member to a 1-node cluster, the cluster cannot make progress before the new member starts, because it needs two members as a majority to agree on consensus. You will only see this behavior between the time `etcdctl member add` informs the cluster about the new member and the new member successfully establishes a connection to the existing one.
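
The add-then-start sequence for a single new member might look like the following (names and addresses are illustrative):

```
$ etcdctl member add infra3 http://10.0.1.13:2380
$ etcd -name infra3 -initial-cluster-state existing ...
```

Only after both steps complete, and the cluster reports the new member healthy, should the next member be added.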

#### Error Cases
#### Error Cases When Adding Members

In the following case we have not included our new host in the list of enumerated nodes.
If this is a new cluster, the node must be added to the list of initial cluster members.
@@ -162,7 +162,7 @@ etcd: this member has been permanently removed from the cluster. Exiting.
exit 1
```
#### Strict Reconfiguration Check Mode (`-strict-reconfig-check`)
### Strict Reconfiguration Check Mode (`-strict-reconfig-check`)
As described above, the best practice for adding new members is to configure a single member at a time and verify it starts correctly before adding more new members. This step-by-step approach is very important because if a newly added member is not configured correctly (for example, the peer URLs are incorrect), the cluster can lose quorum. Quorum loss happens because the newly added member is counted in the quorum even if that member is not reachable from the other existing members. Quorum loss might also happen if there is a connectivity issue or there are operational issues.