
Commit

no to 'need to' part 2
kalexand-rh committed May 23, 2019
1 parent 463f333 commit 75e37cb
Showing 36 changed files with 58 additions and 81 deletions.
2 changes: 1 addition & 1 deletion applications/deployments/deployment-strategies.adoc
@@ -39,7 +39,7 @@ Consider the following when choosing a deployment strategy:
the application.
- If the application is a hybrid of microservices and traditional components,
downtime might be required to complete the transition.
- You need the infrastructure to do this.
- You must have the infrastructure to do this.
- If you have a non-isolated test environment, you can break both new and old
versions.

2 changes: 1 addition & 1 deletion applications/operator_sdk/osdk-generating-csvs.adoc
@@ -9,7 +9,7 @@ A _ClusterServiceVersion_ (CSV) is a YAML manifest created from Operator
metadata that assists the Operator Lifecycle Manager (OLM) in running the
Operator in a cluster. It is the metadata that accompanies an Operator container
image, used to populate user interfaces with information like its logo,
description, and version. It is also a source of technical information needed to
description, and version. It is also a source of technical information that is required to
run the Operator, like the RBAC rules it requires and which Custom Resources
(CRs) it manages or depends on.
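A heavily abbreviated sketch of such a manifest might look like the following (the Operator name, version, and RBAC rule here are illustrative assumptions, not taken from a real Operator):

```yaml
# Hypothetical, abbreviated CSV; all names and rules are illustrative.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: example-operator.v0.1.0
spec:
  displayName: Example Operator
  description: Manages Example custom resources.
  version: 0.1.0
  install:
    strategy: deployment
    spec:
      permissions:
      - serviceAccountName: example-operator
        rules:
        - apiGroups: [""]
          resources: ["pods", "services"]
          verbs: ["get", "list", "watch"]
  customresourcedefinitions:
    owned:
    - name: examples.app.example.com
      kind: Example
      version: v1alpha1
```

OLM reads fields like these to render the Operator in user interfaces and to grant the RBAC the Operator declares it needs.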

2 changes: 1 addition & 1 deletion applications/operators/olm-what-operators-are.adoc
@@ -21,7 +21,7 @@ Kubernetes application.

A Kubernetes application is an app that is both deployed on Kubernetes and
managed using the Kubernetes APIs and `kubectl` or `oc` tooling. To be able to
make the most of Kubernetes, you need a set of cohesive APIs to extend in order
make the most of Kubernetes, you require a set of cohesive APIs to extend in order
to service and manage your apps that run on Kubernetes. Think of
Operators as the runtime that manages this type of app on Kubernetes.

2 changes: 1 addition & 1 deletion applications/pruning-objects.adoc
@@ -10,7 +10,7 @@ cluster's etcd data store through normal user operations, such as when building
and deploying applications.

Cluster administrators can periodically prune older versions of objects from the
cluster that are no longer needed. For example, by pruning images you can delete
cluster that are no longer required. For example, by pruning images you can delete
older images and layers that are no longer in use, but are still taking up disk
space.

@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

You can uninstall the {asb-name} if you no longer need access to the service
You can uninstall the {asb-name} if you no longer require access to the service
bundles that it provides.

[IMPORTANT]
@@ -5,7 +5,7 @@ include::modules/common-attributes.adoc[]

toc::[]

You can uninstall the {tsb-name} if you no longer need access to the template
You can uninstall the {tsb-name} if you no longer require access to the template
applications that it provides.

[IMPORTANT]
4 changes: 2 additions & 2 deletions architecture/understanding-development.adoc
@@ -35,11 +35,11 @@ There are many ways to approach application development with containers. The goa
* Creating a Kubernetes manifest and saving it to a git repository
* Making an Operator to share your application with others

Although we are illustrating a particular path from a simple container to an enterprise-ready application, along the way you will see options you have to incorporate different tools and methods, as well as reasons why you might want to choose those other options.

include::modules/building-simple-container.adoc[leveloffset=+1]
include::modules/choosing-container-build-tools.adoc[leveloffset=+2]
include::modules/choosing-base-image.adoc[leveloffset=+2]
include::modules/choosing-registry.adoc[leveloffset=+2]
include::modules/creating-kubernetes-manifest-openshift.adoc[leveloffset=+1]
include::modules/develop-for-operators.adoc[leveloffset=+1]
2 changes: 1 addition & 1 deletion builds/custom-builds-buildah.adoc
@@ -10,7 +10,7 @@ nodes. This means the _mount docker socket_ option of a custom build is not
guaranteed to provide an accessible Docker socket for use within a custom build
image.

If you need this capability in order to build and push images, add the Buildah
If you require this capability in order to build and push images, add the Buildah
tool to your custom build image and use it to build and push the image within your
custom build logic. The following is an example of how to run custom builds with
Buildah.
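For illustration, adding Buildah to a custom builder image might look like the following sketch (the base image name and package manager are assumptions; adjust both for your actual builder image):

```dockerfile
# Hypothetical custom builder image; the base image is illustrative.
FROM registry.example.com/custom-builder-base:latest
# Install Buildah so the custom build logic can build and push images
# without relying on a mounted Docker socket.
RUN dnf install -y buildah && dnf clean all
```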
2 changes: 1 addition & 1 deletion contributing_to_docs/contributing.adoc
@@ -24,7 +24,7 @@ The
https://github.com/orgs/openshift/teams/team-documentation[documentation team]
reviews the PR and arranges further review by the development and quality
assurance teams, as required.
If the PR needs changes, updates, or corrections required, we will let you know
If the PR requires changes, updates, or corrections, we will let you know
in the PR. We might request that you make the changes or let you know that we
incorporated your content in a different PR. When the PR has been reviewed,
all updates are complete, and all commits are squashed, we'll merge your PR and
2 changes: 1 addition & 1 deletion contributing_to_docs/doc_guidelines.adoc
@@ -231,7 +231,7 @@ And for the `openshift-origin` distro:

Considering that we use distinct branches to keep content for product versions
separated, global use of `{product-version}` across all branches is probably
less useful, but it is available if you come across a need for it. Just consider
less useful, but it is available if you come across a requirement for it. Just consider
how it will render across any branches that the content appears in.

If it makes more sense in context to refer to the major version of the product
4 changes: 2 additions & 2 deletions contributing_to_docs/term_glossary.adoc
@@ -25,8 +25,8 @@ architecture.

[NOTE]
====
If you want to add terms or other content to this document, or if anything needs
to be fixed, send an email to openshift-docs@redhat.com or submit a PR
If you want to add terms or other content to this document, or if anything must
be fixed, send an email to openshift-docs@redhat.com or submit a PR
on GitHub.
====

4 changes: 2 additions & 2 deletions contributing_to_docs/tools_and_setup.adoc
@@ -58,7 +58,7 @@ documentation is created in AsciiDoc, and is processed with http://asciibinder.o
which is an http://asciidoctor.org/[AsciiDoctor]-based docs management system.


=== What you need
=== What you require
The following are minimum requirements:

* A bash shell environment (Linux and OS X include a bash shell environment out
@@ -82,7 +82,7 @@ live content editing on a Fedora Linux system.

NOTE: If you already have AsciiBinder installed, you might be due for an update.
These directions assume that you are using AsciiBinder 0.1.15 or newer. To check
and update if necessary, simply run `gem update ascii_binder`. Note that you might need root permissions.
and update if necessary, simply run `gem update ascii_binder`. Note that you might require root permissions.

=== Building the collection
With the initial setup complete, you are ready to build the collection.
2 changes: 1 addition & 1 deletion logging/config/efk-logging-curator.adoc
@@ -15,7 +15,7 @@ which incorporates the Curator configuration file, *_curator5.yaml_* and an {pro

{product-title} uses the *_config.yaml_* internally to generate the Curator link:https://www.elastic.co/guide/en/elasticsearch/client/curator/5.2/actionfile.html[`action` file].

Optionally, you can use the `action` file, directly. Editing this file allows you to use any action that Curator has available to it to be run periodically. However, this is only recommended for advanced users as modifying the file can be destructive to the cluster and can cause removal of required indices/settings from Elasticsearch. Most users only need to modify the Curator configuration map and never edit the `action` file.
Optionally, you can use the `action` file, directly. Editing this file allows you to use any action that Curator has available to it to be run periodically. However, this is only recommended for advanced users as modifying the file can be destructive to the cluster and can cause removal of required indices/settings from Elasticsearch. Most users must modify only the Curator configuration map and never edit the `action` file.

// The following include statements pull in the module files that comprise
// the assembly. Include any combination of concept, procedure, or reference
6 changes: 3 additions & 3 deletions logging/config/efk-logging-management.adoc
@@ -10,11 +10,11 @@ The Cluster Logging Operator and Elasticsearch Operator can be in a _Managed_ or

In managed state, the Cluster Logging Operator (CLO) responds to changes in the Cluster Logging Custom Resource (CR) and attempts to update the cluster to match the CR.

In order to modify certain components managed by the Cluster Logging Operator or the Elasticsearch Operator, you must set the operator to the _unmanaged_ state.

In Unmanaged state, the operators do not respond to changes in the CRs. The administrator assumes full control of individual component configurations and upgrades when in unmanaged state.

The {product-title} documentation indicates in a prerequisite step when you need to set the cluster to Unmanaged.
The {product-title} documentation indicates in a prerequisite step when you must set the cluster to Unmanaged.

[NOTE]
====
2 changes: 1 addition & 1 deletion modules/building-simple-container.adoc
@@ -4,7 +4,7 @@
[id="building-simple-container_{context}"]
= Building a simple container

You have an idea for an application and you want to containerize it. All you need to get started is a tool for building a container (buildah or docker) and a file that describes what will go into your container (typically, a https://docs.docker.com/engine/reference/builder/[Dockerfile]). Next you will want a place to push the resulting container image (a container registry) so you can pull it to run anywhere you want it to run.
You have an idea for an application and you want to containerize it. All you must have to get started is a tool for building a container (buildah or docker) and a file that describes what will go into your container (typically, a https://docs.docker.com/engine/reference/builder/[Dockerfile]). Next you will want a place to push the resulting container image (a container registry) so you can pull it to run anywhere you want it to run.

Some examples of each of those components just described come with most Linux systems, except for the Dockerfile which you provide yourself. The following diagram shows what the process of building and pushing an image entails:

2 changes: 1 addition & 1 deletion modules/cli-developer-advanced.adoc
@@ -111,7 +111,7 @@ $ oc patch node/node1 -p '{"spec":{"unschedulable":true}}'

[NOTE]
====
If you need to patch a Custom Resource Definition, you must include the
To patch a Custom Resource Definition, you must include the
`--type merge` option in the command.
====

2 changes: 1 addition & 1 deletion modules/completing-installation.adoc
@@ -5,7 +5,7 @@
[id="completing-installation_{context}"]
= Completing and verifying the {product-title} installation

When the bootstrap node is done with its work and has handed off control to the new {product-title} cluster, the bootstrap node is destroyed. The installer waits for the cluster to initialize, creates a route to the {product-title} console, and presents the information and credentials you need to log into the cluster. Here’s an example:
When the bootstrap node is done with its work and has handed off control to the new {product-title} cluster, the bootstrap node is destroyed. The installer waits for the cluster to initialize, creates a route to the {product-title} console, and presents the information and credentials you require to log into the cluster. Here’s an example:

----
INFO Install complete!                                
8 changes: 4 additions & 4 deletions modules/efk-logging-curator-actions.adoc
@@ -5,9 +5,9 @@
[id="efk-logging-curator-actions_{context}"]
= Using the Curator Action file

The *Curator* ConfigMap in the `openshift-logging` project includes a link:https://www.elastic.co/guide/en/elasticsearch/client/curator/5.2/actionfile.html[Curator *action* file] where you configure any Curator action to be run periodically.

However, when you use the *action* file, {product-title} ignores the `config.yaml` section of the *curator* ConfigMap, which is configured to ensure important internal indices do not get deleted by mistake. In order to use the *action* file, you should add an exclude rule to your configuration to retain these indices. You also need to manually add all the other patterns following the steps in this topic.
However, when you use the *action* file, {product-title} ignores the `config.yaml` section of the *curator* ConfigMap, which is configured to ensure important internal indices do not get deleted by mistake. In order to use the *action* file, you should add an exclude rule to your configuration to retain these indices. You also must manually add all the other patterns following the steps in this topic.

[IMPORTANT]
====
@@ -23,7 +23,7 @@ Using the *action* file is recommended only for advanced users as using this fil

.Procedure

To configure Curator to delete indices:

. Edit the Curator ConfigMap:
+
@@ -68,6 +68,6 @@ actions:
exclude: False
----
<1> Specify `delete_indices` to delete the specified index.
<2> Use the `filters` parameters to specify the index to be deleted. See the link:https://www.elastic.co/guide/en/elasticsearch/client/curator/5.2/filters.html[Elastic Search curator documentation] for information on these parameters.
<3> Specify `false` to allow the index to be deleted.
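For example, an exclude rule that retains internal indices while deleting old project indices might look like the following sketch (the internal index names in the regex are assumptions; verify the actual index names in your cluster before using a pattern like this):

```yaml
# Hypothetical action file fragment; the regex value is illustrative.
actions:
  1:
    action: delete_indices
    description: Delete indices older than 30 days, keeping internal indices.
    filters:
    - filtertype: pattern
      kind: regex
      value: '^(\.kibana|\.searchguard|\.operations).*$'
      exclude: True    # retain any index matching this pattern
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
```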

2 changes: 1 addition & 1 deletion modules/efk-logging-curator-scripted.adoc
@@ -5,7 +5,7 @@
[id="efk-logging-curator-scripted_{context}"]
= Configuring Curator in scripted deployments

Use the information in this section if you need to configure Curator in scripted deployments.
Use the information in this section if you must configure Curator in scripted deployments.

.Prerequisites

30 changes: 15 additions & 15 deletions modules/efk-logging-deploying-about.adoc
@@ -5,20 +5,20 @@
[id="efk-logging-deploying-about_{context}"]
= About deploying and configuring cluster logging

{product-title} cluster logging is designed to be used with the default configuration, which is tuned for small to medium sized {product-title} clusters.

The installation instructions that follow include a sample Cluster Logging Custom Resource (CR), which you can use to create a cluster logging instance
and configure your cluster logging deployment.

If you want to use the default cluster logging install, you can use the sample CR directly.

If you want to customize your deployment, make changes to the sample CR as needed. The following describes the configurations you can make when installing your cluster logging instance or modify after installation. See the Configuring sections for more information on working with each component, including modifications you can make outside of the Cluster Logging Custom Resource.

[id="efk-logging-deploy-about-config_{context}"]
== Configuring and Tuning Cluster Logging

You can configure your cluster logging environment by modifying the Cluster Logging Custom Resource deployed
in the `openshift-logging` project.

You can modify any of the following components upon install or after install:

@@ -27,24 +27,24 @@ The Cluster Logging Operator and Elasticsearch Operator can be in a _Managed_ or

In managed state, the Cluster Logging Operator (CLO) responds to changes in the Cluster Logging Custom Resource (CR) and attempts to update the cluster to match the CR.

In order to modify certain components managed by the Cluster Logging Operator or the Elasticsearch Operator, you must set the operator to the _unmanaged_ state.

In Unmanaged state, the operators do not respond to changes in the CRs. The administrator assumes full control of individual component configurations and upgrades when in unmanaged state.

[NOTE]
====
The {product-title} documentation indicates in a prerequisite step when you need to set the cluster to Unmanaged.
The {product-title} documentation indicates in a prerequisite step when you must set the cluster to Unmanaged.
====

----
spec:
managementState: "Managed"
----

The {product-title} documentation indicates in a prerequisite step when you need to set the cluster to Unmanaged.
The {product-title} documentation indicates in a prerequisite step when you must set the cluster to Unmanaged.

[IMPORTANT]
====
An unmanaged deployment will not receive updates until the `ClusterLogging` custom resource is placed back into a managed state.
====
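Conversely, a minimal sketch of placing the operator into the unmanaged state flips the same field in the custom resource:

```yaml
spec:
  managementState: "Unmanaged"
```

The operators then stop responding to CR changes until the value is set back to `"Managed"`.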

@@ -97,7 +97,7 @@ spec:
----

Elasticsearch storage::
You can configure a persistent storage class and size for the Elasticsearch cluster using the `storageClass` `name` and `size` parameters. The Cluster Logging Operator creates a `PersistentVolumeClaim` for each data node in the Elasticsearch cluster based on these parameters.

----
spec:
@@ -106,12 +106,12 @@
elasticsearch:
nodeCount: 3
storage:
storageClass:
name: "gp2"
size: "200G"
----

This example specifies each data node in the cluster will be bound to a `PersistentVolumeClaim` that
requests "200G" of "gp2" storage. Each primary shard will be backed by a single replica.

[NOTE]
Expand All @@ -138,9 +138,9 @@ You can set the policy that defines how Elasticsearch shards are replicated acro

////
Log collectors::
You can select which log collector is deployed as a Daemonset to each node in the {product-title} cluster, either:
* Fluentd - The default log collector based on Fluentd.
* Rsyslog - Alternate log collector supported as **Tech Preview** only.
----
2 changes: 1 addition & 1 deletion modules/installation-aws-user-infra-requirements.adoc
@@ -160,7 +160,7 @@ If `m4` instance types are not available in your region, such as with

.Required VPC components

You need to provide a suitable VPC and subnets that allow communication to your
You must provide a suitable VPC and subnets that allow communication to your
machines.

[cols="2a,7a,3a,3a",options="header"]
2 changes: 1 addition & 1 deletion modules/installation-bootstrap-gather.adoc
@@ -11,7 +11,7 @@ installation.
[NOTE]
====
You use a different command to gather logs about an unsuccessful installation
than to gather logs from a running cluster. If you need to gather logs from a
than to gather logs from a running cluster. If you must gather logs from a
running cluster, use the `oc adm must-gather` command.
====

