Merge pull request kubernetes#1388 from zhxcai/typos
Automatic merge from submit-queue.

Update a lot of typos
Kubernetes Submit Queue authored Nov 10, 2017
2 parents 3907591 + e4db35f commit f6b7936
Showing 8 changed files with 22 additions and 22 deletions.
24 changes: 12 additions & 12 deletions contributors/design-proposals/instrumentation/events-redesign.md
@@ -100,13 +100,13 @@ We'd like to propose following structure in Events object in the new events API

```golang
type Event struct {
// <type and object metadata>

// Time when this Event was first observed.
EventTime metav1.MicroTime

// Data about the Event series this event represents or nil if it's
// a singleton Event.
// +optional
Series *EventSeries

@@ -123,7 +123,7 @@ type Event struct {
Reason string

// The object this Event is “about”. In most cases it's the object that the
// given controller implements.
// +optional
Regarding ObjectReference

@@ -132,24 +132,24 @@ type Event struct {
Related *ObjectReference

// Human readable description of the Event. Possibly discarded when and
// Event series is being deduplicated.
// +optional
Note string

// Type of this event (Normal, Warning), new types could be added in the
// future.
// +optional
Type string
}

type EventSeries struct {
Count int32
LastObservedTime MicroTime
State EventSeriesState
}

const (
EventSeriesStateOngoing = "Ongoing"
EventSeriesStateFinished = "Finished"
EventSeriesStateUnknown = "Unknown"
)
@@ -161,7 +161,7 @@ EventSeriesStateOngoing = "Ongoing"
| ----------| -------| -------| --------------------|---------|
| Node X | BecameUnreachable | HeartbeatTooOld | kubernetes.io/node-ctrl | <nil> |
| Node Y | FailedToAttachVolume | Unknown | kubernetes.io/pv-attach-ctrl | PVC X |
-| ReplicaSet X | FailedToInstantiantePod | QuotaExceeded | kubernetes.io/replica-set-ctrl | <nil> |
+| ReplicaSet X | FailedToInstantiatePod | QuotaExceeded | kubernetes.io/replica-set-ctrl | <nil> |
| ReplicaSet X | InstantiatedPod | | kubernetes.io/replica-set-ctrl | Pod Y |
| Ingress X | CreatedLoadBalancer | | kubernetes.io/ingress-ctrl | <nil> |
| Pod X | ScheduledOn | | kubernetes.io/scheduler | Node Y |
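To make the proposed shape concrete, here is a minimal, self-contained Go sketch of how the FailedToInstantiatePod row from the table above might be expressed with the fields visible in this diff. The types are redeclared locally purely for illustration (with time.Time standing in for metav1.MicroTime); they are not the real API package.

```golang
package main

import (
	"fmt"
	"time"
)

// Illustrative stand-ins for the proposed types; field names mirror the
// struct shown in the diff above, but these are not the real API types.
type ObjectReference struct {
	Kind      string
	Namespace string
	Name      string
}

type EventSeriesState string

const (
	EventSeriesStateOngoing  EventSeriesState = "Ongoing"
	EventSeriesStateFinished EventSeriesState = "Finished"
	EventSeriesStateUnknown  EventSeriesState = "Unknown"
)

type EventSeries struct {
	Count            int32
	LastObservedTime time.Time // metav1.MicroTime in the proposal
	State            EventSeriesState
}

type Event struct {
	EventTime time.Time // metav1.MicroTime in the proposal
	Series    *EventSeries
	Reason    string
	Regarding ObjectReference
	Related   *ObjectReference
	Note      string
	Type      string
}

func main() {
	// A deduplicated Warning: the replica-set controller repeatedly failed to
	// create a Pod for ReplicaSet X because quota was exceeded.
	ev := Event{
		EventTime: time.Now(),
		Reason:    "QuotaExceeded",
		Regarding: ObjectReference{Kind: "ReplicaSet", Namespace: "default", Name: "x"},
		Note:      "failed to instantiate pod: quota exceeded",
		Type:      "Warning",
		Series: &EventSeries{
			Count:            5,
			LastObservedTime: time.Now(),
			State:            EventSeriesStateOngoing,
		},
	}
	fmt.Printf("%+v\n", ev)
}
```

Under deduplication, repeated occurrences would bump Series.Count and Series.LastObservedTime rather than creating new singleton Events, and per the comment in the struct above the Note text may be discarded.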
@@ -32,7 +32,7 @@ how careful we need to be.
### Huge number of handshakes slows down API server

It was a long standing issue for performance and is/was an important bottleneck for scalability (https://github.com/kubernetes/kubernetes/issues/13671). The bug directly
-causing this problem was incorrect (from the golangs standpoint) handling of TCP connections. Secondary issue was that elliptic curve encryption (only one available in go 1.4)
+causing this problem was incorrect (from the golang's standpoint) handling of TCP connections. Secondary issue was that elliptic curve encryption (only one available in go 1.4)
is unbelievably slow.

## Proposed metrics/statistics to gather/compute to avoid problems
@@ -42,7 +42,7 @@ is unbelievably slow.
Basic ideas:
- number of Pods/ReplicationControllers/Services in the cluster
- number of running replicas of master components (if they are replicated)
-- current elected master of ectd cluster (if running distributed version)
+- current elected master of etcd cluster (if running distributed version)
- number of master component restarts
- number of lost Nodes

@@ -20,7 +20,7 @@ Use cases which are not listed below are out of the scope of MVP version of Reso

HPA uses the latest value of cpu usage as an average aggregated across 1 minute
(the window may change in the future). The data for a given set of pods
-(defined either by pod list or label selector) should be accesible in one request
+(defined either by pod list or label selector) should be accessible in one request
due to performance issues.
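As a concrete reading of the requirement above, a small Go sketch that builds a single request covering every pod matched by a label selector; the endpoint path is an assumed, metrics-server-style URL used only for illustration and is not specified by this proposal.

```golang
package main

import (
	"fmt"
	"net/url"
)

// Sketch of the "one request per pod set" access pattern described above:
// the caller asks for metrics for every pod matching a label selector in a
// single call instead of one call per pod. The path is an assumption.
func podSetMetricsURL(apiServer, namespace, labelSelector string) string {
	u := url.URL{
		Scheme: "https",
		Host:   apiServer,
		Path:   fmt.Sprintf("/apis/metrics.k8s.io/v1beta1/namespaces/%s/pods", namespace),
	}
	q := url.Values{}
	q.Set("labelSelector", labelSelector) // e.g. "app=frontend"
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	// HPA-style caller: one request covers all pods selected by the target's
	// label selector.
	fmt.Println(podSetMetricsURL("kubernetes.default.svc", "default", "app=frontend"))
}
```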

#### Scheduler
6 changes: 3 additions & 3 deletions contributors/design-proposals/multicluster/federation-lite.md
@@ -86,7 +86,7 @@ service of type LoadBalancer. The native cloud load-balancers on both AWS &
GCE are region-level, and support load-balancing across instances in multiple
zones (in the same region). For both clouds, the behaviour of the native cloud
load-balancer is reasonable in the face of failures (indeed, this is why clouds
-provide load-balancing as a primitve).
+provide load-balancing as a primitive).

For multi-AZ clusters we will therefore simply rely on the native cloud provider
load balancer behaviour, and we do not anticipate substantial code changes.
@@ -170,8 +170,8 @@ GCE. If you had two volumes both named `myvolume` in two different GCE zones,
this would not be ambiguous when Kubernetes is operating only in a single zone.
But, when operating a cluster across multiple zones, `myvolume` is no longer
sufficient to specify a volume uniquely. Worse, the fact that a volume happens
-to be unambigious at a particular time is no guarantee that it will continue to
-be unambigious in future, because a volume with the same name could
+to be unambiguous at a particular time is no guarantee that it will continue to
+be unambiguous in future, because a volume with the same name could
subsequently be created in a second zone. While perhaps unlikely in practice,
we cannot automatically enable multi-AZ clusters for GCE users if this then causes
volume mounts to stop working.
@@ -162,7 +162,7 @@ For the 1.4 release, this feature will be implemented for the GCE cloud provider

- Node: On the node, we expect to see the real source IP of the client. Destination IP will be the Service Virtual External IP.

-- Pod: For processes running inside the Pod network namepsace, the source IP will be the real client source IP. The destination address will the be Pod IP.
+- Pod: For processes running inside the Pod network namespace, the source IP will be the real client source IP. The destination address will the be Pod IP.

#### GCE Expected Packet Destination IP (HealthCheck path)

@@ -75,7 +75,7 @@ There are two types of configuration that are stored on disk:
- cached configurations from a remote source, e.g. `ConfigMaps` from etcd.
- the local "init" configuration, e.g. the set of config files the node is provisioned with.

-The Kubelet should accept a `--dynamic-config-dir` flag, which specifies a directory for storing all of the information necessary for dynamic configuraiton from remote sources; e.g. the cached configurations, which configuration is currently in use, which configurations are known to be bad, etc.
+The Kubelet should accept a `--dynamic-config-dir` flag, which specifies a directory for storing all of the information necessary for dynamic configuration from remote sources; e.g. the cached configurations, which configuration is currently in use, which configurations are known to be bad, etc.
- When the Kubelet downloads a `ConfigMap`, it will checkpoint a serialization of the `ConfigMap` object to a file at `{dynamic-config-dir}/checkpoints/{UID}`.
- We checkpoint the entire object, rather than unpacking the contents to disk, because the former is less complex and reduces chance for errors during the checkpoint process.

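A minimal Go sketch of the checkpoint layout just described, assuming the checkpoint file simply holds the serialized object as one opaque blob; helper names are illustrative, not the Kubelet's own.

```golang
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// The whole serialized ConfigMap object is written to
// {dynamic-config-dir}/checkpoints/{UID}, per the description above.
func checkpointPath(dynamicConfigDir, uid string) string {
	return filepath.Join(dynamicConfigDir, "checkpoints", uid)
}

func saveCheckpoint(dynamicConfigDir, uid string, serialized []byte) error {
	p := checkpointPath(dynamicConfigDir, uid)
	if err := os.MkdirAll(filepath.Dir(p), 0o755); err != nil {
		return err
	}
	// Write the serialized object as a single opaque blob; nothing is
	// unpacked, which keeps the checkpoint step simple.
	return os.WriteFile(p, serialized, 0o600)
}

func main() {
	// The UID value is a placeholder; in practice it would come from the
	// downloaded ConfigMap's metadata.
	err := saveCheckpoint("/var/lib/kubelet/dynamic-config", "2f1d7cb9-example-uid", []byte(`{"kind":"ConfigMap"}`))
	fmt.Println("checkpoint written, err =", err)
}
```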
2 changes: 1 addition & 1 deletion contributors/design-proposals/node/kubelet-cri-logging.md
@@ -199,7 +199,7 @@ clients attaching as well.

There are ad-hoc solutions/discussions that addresses one or two of the
requirements, but no comprehensive solution for CRI specifically has been
-proposed so far (with the excpetion of @tmrtfs's proposal
+proposed so far (with the exception of @tmrtfs's proposal
[#33111](https://github.com/kubernetes/kubernetes/pull/33111), which has a much
wider scope). It has come up in discussions that kubelet can delegate all the
logging management to the runtime to allow maximum flexibility. However, it is
2 changes: 1 addition & 1 deletion contributors/design-proposals/node/kubelet-rkt-runtime.md
@@ -74,7 +74,7 @@ In addition, the rkt cli has historically been the primary interface to the rkt

The initial integration will execute the rkt binary directly for app creation/start/stop/removal, as well as image pulling/removal.

-The creation of pod sanbox is also done via rkt command line, but it will run under `systemd-run` so it's monitored by the init process.
+The creation of pod sandbox is also done via rkt command line, but it will run under `systemd-run` so it's monitored by the init process.

In the future, some of these decisions are expected to be changed such that rkt is vendored as a library dependency for all operations, and other init systems will be supported as well.

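A rough Go sketch of the `systemd-run` wrapping described above, so the sandbox process is supervised by the init process; the exact rkt subcommand and flags shown are assumptions for illustration only.

```golang
package main

import (
	"fmt"
	"os/exec"
)

// Launch a pod sandbox through systemd-run so that systemd tracks and
// monitors it as a transient unit. The rkt arguments are an assumed CLI
// form, not the exact invocation used by the integration.
func startSandbox(unitName string) error {
	cmd := exec.Command(
		"systemd-run",
		"--unit="+unitName, // transient unit name, so systemd supervises the process
		"rkt", "app", "sandbox", // assumed rkt CLI form for creating an empty pod sandbox
	)
	return cmd.Run()
}

func main() {
	if err := startSandbox("rkt-sandbox-example"); err != nil {
		fmt.Println("failed to start sandbox:", err)
	}
}
```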
