
Merge pull request openshift#10363 from kalexand-rh/build_warnings
fixing build warnings
kalexand-rh authored Jun 26, 2018
2 parents 47e54f7 + 36530fe commit e6af5e2
Showing 16 changed files with 48 additions and 45 deletions.
4 changes: 2 additions & 2 deletions admin_guide/diagnostics_tool.adoc
@@ -393,10 +393,10 @@ when running the container as a non-root user.
<3> Change *_/etc/ansible/hosts_* to the location of your cluster's inventory file,
if different. This file will be bind-mounted to *_/tmp/inventory_*, which is
used according to the `INVENTORY_FILE` environment variable in the container.
-<3> The `PLAYBOOK_FILE` environment variable is set to the location of the
+<4> The `PLAYBOOK_FILE` environment variable is set to the location of the
*_health.yml_* playbook relative to *_/usr/share/ansible/openshift-ansible_*
inside the container.
-<4> Set any variables desired for a single run with the `-e key=value` format.
+<5> Set any variables desired for a single run with the `-e key=value` format.

In the above command, the SSH key is mounted with the `:Z` flag so that the
container can read the SSH key from its restricted SELinux context; this means
10 changes: 6 additions & 4 deletions admin_guide/scheduling/node_affinity.adoc
@@ -25,6 +25,7 @@ Required rules *must* be met before a pod can be scheduled on a node. Preferred
If labels on a node change at runtime such that a node affinity rule on a pod is no longer met, the pod continues to run on the node.
====

+[[admin-guide-configuring-affinity]]
== Configuring Node Affinity

You configure node affinity through the pod specification file. You can specify a xref:admin-guide-sched-affinity-config-req[required rule], a xref:admin-guide-sched-affinity-config-pref[preferred rule], or both. If you specify both, the node must first meet the required rule, then attempts to meet the preferred rule.
@@ -89,7 +90,9 @@ spec:
<2> Defines a preferred rule.
<3> Specifies a weight for a preferred rule. The node with the highest weight is preferred.
<4> The key/value pair (label) that must be matched to apply the rule.
-<4> The operator represents the relationship between the label on the node and the set of values in the `matchExpression` parameters in the pod specification. This value can be `In`, `NotIn`, `Exists`, or `DoesNotExist`, `Lt`, or `Gt`.
+<5> The operator represents the relationship between the label on the node and
+the set of values in the `matchExpression` parameters in the pod specification.
+This value can be `In`, `NotIn`, `Exists`, `DoesNotExist`, `Lt`, or `Gt`.

There is no explicit _node anti-affinity_ concept, but using the `NotIn` or `DoesNotExist` operator replicates that behavior.
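
For reference, a preferred-rule stanza with these fields filled in might look like the following sketch. The label key `e2e-az-name` and value `e2e-az3` are illustrative only, borrowed from the example file names in this diff:

[source,yaml]
----
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50                # higher weight is preferred
        preference:
          matchExpressions:
          - key: e2e-az-name      # label key on the node
            operator: In          # In, NotIn, Exists, DoesNotExist, Lt, or Gt
            values:
            - e2e-az3
----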

@@ -193,7 +196,7 @@ $ oc create -f e2e-az3.yaml
The following examples demonstrate node affinity.

[[admin-guide-sched-affinity-examples1]]
-===== Node Affinity with Matching Labels
+=== Node Affinity with Matching Labels

The following example demonstrates node affinity for a node and pod with matching labels:
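
The example itself is collapsed in this view; a minimal sketch of the idea, assuming a node label `e2e-az-name=e2e-az1` on `node1`, is:

[source,yaml]
----
# Assumes the node was labeled first, for example:
#   oc label node node1 e2e-az-name=e2e-az1
apiVersion: v1
kind: Pod
metadata:
  name: pod-s1
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: e2e-az-name
            operator: In
            values:
            - e2e-az1
  containers:
  - name: pod-s1
    image: registry.example.com/hello-openshift   # placeholder image
----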

@@ -242,7 +245,7 @@ pod-s1 1/1 Running 0 4m IP1 node1
----

[[admin-guide-sched-affinity-examples2]]
-===== Node Affinity with No Matching Labels
+=== Node Affinity with No Matching Labels

The following example demonstrates node affinity for a node and pod without matching labels:

@@ -285,4 +288,3 @@ Events:
--------- -------- ----- ---- ------------- -------- ------
1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).
----
-
8 changes: 4 additions & 4 deletions admin_guide/scheduling/pod_affinity.adoc
@@ -94,7 +94,7 @@ spec:
<2> Defines a preferred rule.
<3> Specifies a weight for a preferred rule. The node with the highest weight is preferred.
<4> The key and value (label) that must be matched to apply the rule.
-<4> The operator represents the relationship between the label on the existing pod and the set of values in the `matchExpression` parameters in the specification for the new pod. Can be `In`, `NotIn`, `Exists`, or `DoesNotExist`.
+<5> The operator represents the relationship between the label on the existing pod and the set of values in the `matchExpression` parameters in the specification for the new pod. Can be `In`, `NotIn`, `Exists`, or `DoesNotExist`.
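
For reference, a preferred rule combining these fields might look like the following sketch; the `security=S1` label is an assumed example:

[source,yaml]
----
spec:
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                  # higher weight is preferred
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security          # label on the existing pod
              operator: In           # In, NotIn, Exists, or DoesNotExist
              values:
              - S1
          topologyKey: kubernetes.io/hostname
----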


[NOTE]
@@ -216,7 +216,7 @@ $ oc create -f <pod-spec>.yaml
The following examples demonstrate pod affinity and pod anti-affinity.

[[admin-guide-sched-affinity-examples1-pods]]
-===== Pod Affinity
+=== Pod Affinity

The following example demonstrates pod affinity for pods with matching labels and label selectors.
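
The full example is collapsed here; a condensed sketch of the pattern, with assumed labels and images, is two pods where the second requires co-location with the first:

[source,yaml]
----
apiVersion: v1
kind: Pod
metadata:
  name: pod-s1
  labels:
    security: S1                     # label the second pod selects on
spec:
  containers:
  - name: pod-s1
    image: registry.example.com/hello-openshift   # placeholder image
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-s2
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: security
            operator: In
            values:
            - S1
        topologyKey: kubernetes.io/hostname       # co-locate on the same node
  containers:
  - name: pod-s2
    image: registry.example.com/hello-openshift   # placeholder image
----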

@@ -266,7 +266,7 @@ spec:


[[admin-guide-sched-affinity-examples2-pods]]
-===== Pod Anti-affinity
+=== Pod Anti-affinity

The following example demonstrates pod anti-affinity for pods with matching labels and label selectors.

@@ -320,7 +320,7 @@ pod-s2 0/1 Pending 0 32s <none>
----

[[admin-guide-sched-affinity-examples3-pods]]
-===== Pod Affinity with no Matching Labels
+=== Pod Affinity with no Matching Labels

The following example demonstrates pod affinity for pods without matching labels and label selectors.

2 changes: 1 addition & 1 deletion admin_guide/scheduling/pod_placement.adoc
@@ -247,7 +247,7 @@ admissionConfig:
configuration:
podNodeSelectorPluginConfig: <1>
clusterDefaultNodeSelector: "k3=v3" <2>
-      ns1: region=west,env=test,infra=fedora,os=fedora <2>
+      ns1: region=west,env=test,infra=fedora,os=fedora <3>
----
+
<1> Adds the *Pod Node Selector* admission controller plug-in.
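
For context, the plug-in can also read a per-namespace selector from an annotation on the namespace itself; a sketch with assumed values:

[source,yaml]
----
apiVersion: v1
kind: Namespace
metadata:
  name: ns1
  annotations:
    # Read by the Pod Node Selector plug-in when admitting pods in ns1.
    scheduler.alpha.kubernetes.io/node-selector: region=west,env=test
----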
4 changes: 2 additions & 2 deletions admin_guide/scheduling/scheduler.adoc
@@ -467,7 +467,7 @@ example, where nodes have their physical location or status defined by labels.
"labelsPresence":{
"labels":[
"<label>" <3>
-presence: true/false
+presence: true/false <4>
]
}
}
@@ -669,7 +669,7 @@ regardless of its value.
"labelsPresence":{
"labels":[
"<label>" <3>
-presence: true/false
+presence: true/false <4>
]
}
}
2 changes: 1 addition & 1 deletion admin_guide/scheduling/taints_tolerations.adoc
@@ -76,7 +76,7 @@ A toleration matches a taint:
** the `effect` parameters are the same.

[[discrete]]
-===== Using Multiple Taints
+=== Using Multiple Taints

You can put multiple taints on the same node and multiple tolerations on the same pod. {product-title} processes multiple taints and tolerations as follows:
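
The processing rules themselves are collapsed in this view. For reference, a taint and a toleration that matches it on key, value, and effect look like the following sketch, with placeholder key and value:

[source,yaml]
----
# Node tainted with, for example:
#   oc adm taint nodes node1 key1=value1:NoSchedule
# A pod tolerates that taint when key, value, and effect all match:
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
----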

10 changes: 5 additions & 5 deletions apb_devel/writing/getting_started.adoc
@@ -216,7 +216,7 @@ $ oc new-project getting-started
////

[[apb-devel-writing-gs-provision]]
-==== Provision
+=== Provision

During the `apb init` process, two parts of the provision task were stubbed out. The playbook, *_playbooks/provision.yml_*, and the associated role in *_roles/provision-my-test-apb_*:

@@ -323,7 +323,7 @@ localhost : ok=0 changed=0 unreachable=0 failed=0
----

[[apb-devel-writing-gs-provision-dc]]
-===== Creating a Deploying Configuration
+==== Creating a Deployment Configuration

At a minimum, your APB should deploy the application pods. You can do this by
specifying a
@@ -426,7 +426,7 @@ To clean up before moving on and allow you to provision again, you can delete the
====

[[apb-devel-writing-gs-provision-svc]]
-===== Creating a Service
+==== Creating a Service

You will want to use multiple pods, load balance them, and create a
xref:../../architecture/core_concepts/pods_and_services.adoc#services[service]
@@ -496,7 +496,7 @@ To clean up before moving on and allow you to provision again, you can delete th
====

[[apb-devel-writing-gs-provision-route]]
-===== Creating a Route
+==== Creating a Route

You can expose external access to your application through a reliable named
xref:../../architecture/networking/routes.adoc#architecture-core-concepts-routes[route]:
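
The route task itself is collapsed in this view; the object it creates is roughly the following sketch, where the service name `hello-world` is an assumption based on the example repository:

[source,yaml]
----
apiVersion: v1
kind: Route
metadata:
  name: hello-world
spec:
  to:
    kind: Service
    name: hello-world   # assumed service created earlier in the APB
----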
@@ -555,7 +555,7 @@ link:https://github.com/ansibleplaybookbundle/hello-world-apb[*hello-world-apb*]
example repository.

[[apb-devel-writing-gs-deprovision]]
-==== Deprovision
+=== Deprovision

For the deprovision task, you must destroy all provisioned resources, usually in
reverse order from how they were created.
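
A minimal sketch of such a task list, assuming the Ansible `k8s` module and the object names used above (both assumptions, not from this commit):

[source,yaml]
----
# Delete provisioned objects in reverse order of creation (sketch).
- name: Delete the route
  k8s:
    state: absent
    api_version: v1
    kind: Route
    name: hello-world
    namespace: "{{ namespace }}"

- name: Delete the service
  k8s:
    state: absent
    api_version: v1
    kind: Service
    name: hello-world
    namespace: "{{ namespace }}"
----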
2 changes: 1 addition & 1 deletion dev_guide/expose_service/expose_internal_ip_service.adoc
@@ -72,7 +72,7 @@ oadm policy add-cluster-role-to-user cluster-admin username

[[defining_ip_range]]
// tag::expose-svc-define-ip[]
-==== Defining the Public IP Range
+=== Defining the Public IP Range

// http://playbooks-rhtconsulting.rhcloud.com/playbooks/operationalizing/ingress.html

4 changes: 2 additions & 2 deletions dev_guide/integrating_external_services.adoc
@@ -245,7 +245,7 @@ The following steps outline a scenario for integrating with an external SaaS
provider:

[[saas-define-service-using-ip-address]]
-==== Using an IP address and Endpoints
+=== Using an IP address and Endpoints

. Create an xref:../architecture/core_concepts/pods_and_services.adoc#services[{product-title} service] to represent the external service. This is similar to creating an internal service; however, the difference is in the service's `Selector` field.
+
@@ -368,7 +368,7 @@ The application reads the coordinates and credentials for the service from the
environment and establishes a connection with the service.

[[saas-define-service-using-fqdn]]
-==== Using an External Domain Name
+=== Using an External Domain Name

`ExternalName` services do not have selectors, or any defined ports or
endpoints. You can use an `ExternalName` service to assign traffic to an
4 changes: 2 additions & 2 deletions dev_guide/pod_autoscaling.adoc
@@ -279,7 +279,7 @@ Current CPU utilization: 79% <2>
Min replicas: 1 <3>
Max replicas: 4 <4>
ReplicationController pods: 1 current / 1 desired
-Conditions: <4>
+Conditions: <5>
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale
@@ -292,7 +292,7 @@ Events:
<2> The current CPU utilization across all pods controlled by the deployment configuration.
<3> The minimum number of replicas to scale down to.
<4> The maximum number of replicas to scale up to.
-<4> If the object used the `v2alpha1` API, xref:viewing-a-hpa-status[status conditions] are displayed.
+<5> If the object used the `v2alpha1` API, xref:viewing-a-hpa-status[status conditions] are displayed.
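
For orientation, the autoscaler being described could have been created from a spec like this sketch; the name, scale target, and CPU target are placeholders:

[source,yaml]
----
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend                     # placeholder name
spec:
  scaleTargetRef:
    apiVersion: v1                   # varies by release for DeploymentConfig
    kind: DeploymentConfig
    name: frontend
  minReplicas: 1                     # <3> above
  maxReplicas: 4                     # <4> above
  targetCPUUtilizationPercentage: 80
----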

[[viewing-a-hpa-status]]
=== Viewing Horizontal Pod Autoscaler Status Conditions
18 changes: 9 additions & 9 deletions go_client/connecting_to_the_cluster.adoc
@@ -24,7 +24,7 @@ xref:getting_started.adoc#go-client-getting-started[Getting Started].

In particular, the example shows:

-1. Instantiating a loader for the kubeconfig file:
+* Instantiating a loader for the kubeconfig file:
+
[source, go]
----
@@ -34,23 +34,23 @@ kubeconfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
)
----

-1. Determining the namespace referenced by the current context in the kubeconfig
+* Determining the namespace referenced by the current context in the kubeconfig
file:
+
[source, go]
----
namespace, _, err := kubeconfig.Namespace()
----

-1. Getting a rest.Config from the kubeconfig file. This is passed into all the
+* Getting a rest.Config from the kubeconfig file. This is passed into all the
client objects created:
+
[source, go]
----
restconfig, err := kubeconfig.ClientConfig()
----

-1. Creating clients from the rest.Config:
+* Creating clients from the rest.Config:
+
[source, go]
----
@@ -82,7 +82,7 @@ import (
func main() {
// Build a rest.Config from configuration injected into the Pod by
-// Kubernetes. Clients will use the Pod's ServiceAccount principal.
+// Kubernetes. Clients will use the Pod's ServiceAccount principal.
restconfig, err := rest.InClusterConfig()
if err != nil {
panic(err)
@@ -146,12 +146,12 @@ func main() {

Note: to try out the above example, you will need to ensure:

-1. the Pod's ServiceAccount (called "default" by default) has permissions to
-list Pods and Builds. One way to achieve this is by running `oc policy
+* The Pod's ServiceAccount (called "default" by default) has permissions to
+list Pods and Builds. One way to achieve this is by running `oc policy
add-role-to-user view -z default`.

-1. the downward API is used to pass the Pod's Namespace into an environment
-variable, so that it can be picked up by the application. The following Pod
+* The downward API is used to pass the Pod's Namespace into an environment
+variable so that it can be picked up by the application. The following Pod
spec achieves this:
+
[source, yaml]
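----
# The spec itself is collapsed in this view; this is a sketch with assumed
# names. The downward API injects the Pod's namespace into an environment
# variable that the application reads at startup.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  containers:
  - name: example
    image: example/image             # placeholder image
    env:
    - name: NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
----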
2 changes: 1 addition & 1 deletion install_config/configuring_authentication.adoc
@@ -1432,7 +1432,7 @@ oauthConfig:
organizations: <7>
- myorganization1
- myorganization2
-      teams: <7>
+      teams: <8>
- myorganization1/team-a
- myorganization2/team-b
----
2 changes: 1 addition & 1 deletion install_config/configuring_aws.adoc
@@ -73,7 +73,7 @@ security groups.
include::install_config/topics/configuring_a_security_group.adoc[]

[[overriding-detected-ip-addresses-host-names-aws]]
-==== Overriding Detected IP Addresses and Host Names
+=== Overriding Detected IP Addresses and Host Names
In AWS, situations that require overriding the variables include:

[cols="1,2",options="header"]
Original file line number Diff line number Diff line change
@@ -257,7 +257,7 @@ parameters:
type: pd-standard <1>
zone: us-central1-a <2>
zones: us-central1-a, us-central1-b, us-east1-b <3>
-  fsType: ext4 <3>
+  fsType: ext4 <4>
----
<1> Select either `pd-standard` or `pd-ssd`. The default is `pd-ssd`.
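
Assembled, the full object might read like this sketch; the class name is an assumption:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: slow                          # assumed name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  # zone and zones are mutually exclusive; use one or the other.
  zones: us-central1-a, us-central1-b, us-east1-b
  fsType: ext4
----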
Original file line number Diff line number Diff line change
@@ -82,7 +82,7 @@ $ mkdir -p /mnt/glusterfs/myVol1
$ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1

$ ls -lnZ /mnt/glusterfs/
-drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0 myVol1
+drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0 myVol1 <1> <2>
----
<1> The UID is 592.
<2> The GID is 590.
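
Given those IDs, a pod that mounts the volume would typically declare the GID as a supplemental group; a sketch:

[source,yaml]
----
spec:
  securityContext:
    supplementalGroups: [590]   # GID owning the GlusterFS mount (<2> above)
----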
17 changes: 9 additions & 8 deletions install_config/storage_examples/ceph_rbd_dynamic_example.adoc
@@ -207,9 +207,9 @@ spec:
rather act as labels to match a PV to a PVC.
<2> This claim will look for PVs offering `2Gi` or greater capacity.

-. Save the PVC definition to a file, for example *_ceph-claim.yaml_*,
-
+Save the PVC definition to a file, for example *_ceph-claim.yaml_*,
+and create the PVC:
+
--
----
# oc create -f ceph-claim.yaml
@@ -219,9 +219,11 @@ persistentvolumeclaim "ceph-claim" created
# oc get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
ceph-claim Bound pvc-f548d663-3cac-11e7-9937-0024e8650c7a 2Gi RWO 1m
-<1>
----
-<1> the claim dynamically created a Ceph RBD PV.
+[NOTE]
+====
+The claim dynamically created a Ceph RBD PV.
+====
--
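
The claim definition referenced above is collapsed in this view; it is roughly this shape, per the callouts:

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-claim
spec:
  accessModes:
  - ReadWriteOnce                # matches the RWO access mode shown above
  resources:
    requests:
      storage: 2Gi               # callout <2>: 2Gi or greater
----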

[[ceph-rbd-dynamic-example-creating-the-pod]]
@@ -250,7 +252,7 @@ spec:
volumes:
- name: ceph-vol1 <3>
persistentVolumeClaim:
-      claimName: ceph-claim <5>
+      claimName: ceph-claim <5>
----
<1> The name of this pod as displayed by `oc get pod`.
<2> The image run by this pod. In this case, we are telling `busybox` to sleep.
Expand All @@ -269,9 +271,8 @@ pod "ceph-pod1" created
#verify pod was created
# oc get pod
-NAME READY STATUS RESTARTS AGE
-ceph-pod1 1/1 Running 0 2m
-<1>
+NAME READY STATUS RESTARTS AGE
+ceph-pod1 1/1 Running 0 2m <1>
----
<1> After approximately a minute, the pod will be in the `Running` state.
--
