diff --git a/admin_guide/admin_install_openshift.adoc b/admin_guide/admin_install_openshift.adoc index 5887f9115c0e..e6898bf8ac61 100644 --- a/admin_guide/admin_install_openshift.adoc +++ b/admin_guide/admin_install_openshift.adoc @@ -8,3 +8,8 @@ :toc-title: toc::[] + +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. diff --git a/admin_guide/manage_nodes.adoc b/admin_guide/manage_nodes.adoc index 79124c719b27..e14e1aeeadd8 100644 --- a/admin_guide/manage_nodes.adoc +++ b/admin_guide/manage_nodes.adoc @@ -11,13 +11,13 @@ toc::[] == Overview As an OpenShift administrator, you can manage -link:../architecture/kubernetes_infrastructure.html#node[nodes] in your instance -using the link:cli.html[CLI]. +link:../architecture/infrastructure_components/kubernetes_infrastructure.html#node[nodes] +in your instance using the link:../cli_reference/overview.html[CLI]. When you perform node management operations, the CLI interacts with -link:../architecture/kubernetes_infrastructure.html#node[node objects] that are -representations of nodes. The master uses the information from node objects to -validate nodes with health checks. +link:../architecture/infrastructure_components/kubernetes_infrastructure.html#node[node +objects] that are representations of nodes. The master uses the information from +node objects to validate nodes with health checks. 
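The node-object validation described above can be illustrated with a small sketch. This is illustrative only: the fields below are simplified stand-ins rather than the actual node API schema, although Kubernetes nodes do report a `Ready` condition that health checks evaluate.

```python
# Toy model of the master's health validation over node objects.
# Field names are simplified stand-ins for the real node API schema.

def is_node_ready(node):
    """True if the node object reports a passing "Ready" condition."""
    return any(
        c.get("type") == "Ready" and c.get("status") == "True"
        for c in node.get("conditions", [])
    )

nodes = [
    {"name": "node1.example.com",
     "conditions": [{"type": "Ready", "status": "True"}]},
    {"name": "node2.example.com",
     "conditions": [{"type": "Ready", "status": "Unknown"}]},
]

# Only nodes with a passing "Ready" condition are considered healthy.
healthy = [n["name"] for n in nodes if is_node_ready(n)]
print(healthy)
```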
== Listing Nodes Use the following command to list all nodes that are known to your OpenShift diff --git a/admin_guide/overview.adoc b/admin_guide/overview.adoc index 1a0076466f89..8d97ce35d5eb 100644 --- a/admin_guide/overview.adoc +++ b/admin_guide/overview.adoc @@ -4,3 +4,8 @@ :data-uri: :icons: :experimental: + +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. diff --git a/architecture/additional_concepts/networking.adoc b/architecture/additional_concepts/networking.adoc index 24a5d677f5b4..7326b495cf6b 100644 --- a/architecture/additional_concepts/networking.adoc +++ b/architecture/additional_concepts/networking.adoc @@ -9,12 +9,26 @@ toc::[] -Kubernetes ensures that each pod is able to network with each other, and allocates each pod an IP address from an internal network. This ensures all containers within the pod behave as if they were on the same host. Giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. +Kubernetes ensures that all pods are able to network with each other, and +allocates each pod an IP address from an internal network. This ensures all +containers within the pod behave as if they were on the same host. Giving each +pod its own IP address means that pods can be treated like physical hosts or +virtual machines in terms of port allocation, networking, naming, service +discovery, load balancing, application configuration, and migration.
Instead, we recommend that you create a link:kubernetes_model.html#service[service], then interact with the service. +Creating links between pods is unnecessary. However, it is not recommended that +pods talk to each other directly by using IP addresses. Instead, we +recommend that you create a +link:../core_objects/kubernetes_model.html#service[service], then interact with +the service. == OpenShift SDN -OpenShift deploys a software-defined networking (SDN) approach for connecting Docker containers in an OpenShift cluster. The OpenShift SDN connects all containers across all node hosts. +OpenShift deploys a software-defined networking (SDN) approach for connecting +Docker containers in an OpenShift cluster. The OpenShift SDN connects all +containers across all node hosts. -For the OpenShift beta releases, the OpenShift SDN is available for manual setup. See link:https://github.com/openshift/openshift-sdn[the SDN solution documentation] for more information. OpenShift SDN will be incorporated into the Ansible-based installation procedure in future versions. +For the OpenShift beta releases, the OpenShift SDN is available for manual +setup. See link:https://github.com/openshift/openshift-sdn[the SDN solution +documentation] for more information. OpenShift SDN will be incorporated into the +Ansible-based installation procedure in future versions. diff --git a/architecture/additional_concepts/overview.adoc b/architecture/additional_concepts/overview.adoc index 546a30882a75..8d97ce35d5eb 100644 --- a/architecture/additional_concepts/overview.adoc +++ b/architecture/additional_concepts/overview.adoc @@ -5,3 +5,7 @@ :icons: :experimental: +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started.
diff --git a/architecture/additional_concepts/port_forwarding.adoc b/architecture/additional_concepts/port_forwarding.adoc index 1835fef93573..0d74481eb757 100644 --- a/architecture/additional_concepts/port_forwarding.adoc +++ b/architecture/additional_concepts/port_forwarding.adoc @@ -15,7 +15,10 @@ forwarding to pods. This is implemented using HTTP along with a multiplexed streaming protocol such as link:http://www.chromium.org/spdy[*SPDY*] or link:https://http2.github.io/[*HTTP/2*]. -Developers can link:../../dev_guide/port_forwarding.html[use the CLI] to port forward to a pod. The CLI listens on each local port specified by the user, forwarding via the link:../using_openshift/port_forwarding.html#protocol[described protocol]. +Developers can link:../../dev_guide/port_forwarding.html[use the CLI] to port +forward to a pod. The CLI listens on each local port specified by the user, +forwarding via the link:../../dev_guide/port_forwarding.html#protocol[described +protocol]. == Server Operation The Kubelet handles port forward requests from clients. Upon receiving a diff --git a/architecture/additional_concepts/remote_commands.adoc b/architecture/additional_concepts/remote_commands.adoc index 23a1c8fbdaef..019dca93a6b1 100644 --- a/architecture/additional_concepts/remote_commands.adoc +++ b/architecture/additional_concepts/remote_commands.adoc @@ -15,7 +15,8 @@ executing commands in containers. This is implemented using HTTP along with a multiplexed streaming protocol such as link:http://www.chromium.org/spdy[*SPDY*] or link:https://http2.github.io/[*HTTP/2*]. -Developers can link:../../dev_guide/executing_remote_commands.html[use the CLI] to execute remote commands in containers. +Developers can link:../../dev_guide/executing_remote_commands.html[use the CLI] +to execute remote commands in containers. == Server Operation The Kubelet handles remote execution requests from clients. 
Upon receiving a diff --git a/architecture/core_objects/builds.adoc b/architecture/core_objects/builds.adoc index 61d60f64ebb4..793f35fb9975 100644 --- a/architecture/core_objects/builds.adoc +++ b/architecture/core_objects/builds.adoc @@ -15,14 +15,24 @@ A build is a process of transforming input parameters, typically source code, in == BuildConfig The `BuildConfig` object is the definition of the entire build process. It consists of the following elements: -* _triggers_: Define policies used for automatically invoking builds. -** _GitHub webhooks_: GitHub specific webhooks that specify which repository changes, such as a new commit, should invoke a new build. This trigger is specific to the GitHub API. -** _generic webhooks_: Similar to GitHub webhooks in that they invoke a new build whenever it gets a notification. The difference is its payload is slightly different than GitHub's. -** _image change_: Defines a trigger which is invoked upon availability of a new image in the specified ImageRepository. -* _parameters_ -** _source_: Describes the SCM used to locate the sources. Currently only supports Git. -** _strategy_: Describes which build type is invoked along with build type specific details. -** _output_: Describes the resulting image name, tag, and registry to which the image should be pushed. +[horizontal] +triggers:: Define policies used for automatically invoking builds: +GitHub webhooks::: GitHub-specific webhooks that specify which repository +changes, such as a new commit, should invoke a new build. This trigger is +specific to the GitHub API. +generic webhooks::: Similar to GitHub webhooks in that they invoke a new build +whenever they receive a notification. The difference is that the payload is +slightly different from GitHub's. +image change::: Defines a trigger which is invoked upon availability of a new +image in the specified ImageRepository. + +parameters:: +source::: Describes the SCM used to locate the sources. Currently only supports +Git.
+strategy::: Describes which build type is invoked along with build type specific +details. +output::: Describes the resulting image name, tag, and registry to which the +image should be pushed. There are three available link:openshift_model.html#build-strategies[build strategies]: @@ -38,7 +48,49 @@ Docker builds invoke the plain https://docs.docker.com/reference/commandline/cli [#sti-build] == STI Build -STI builds are a replacement for the OpenShift v2-like developer experience. The developer specifies the repository where their project is located and a builder image, which defines the language and framework used for writing their application. STI then assembles a new image which runs the application defined by the source using the framework defined by the builder image. +link:../../creating_images/sti.html[Source-to-image (STI)] is a tool for +building reproducible Docker images. It produces ready-to-run images by +injecting user source into a Docker image and assembling a new Docker image. +The new image incorporates the base image and built source, and is ready to use +with the `docker run` command. STI supports incremental builds which re-use +previously downloaded dependencies, previously built artifacts, etc. + +*What are the goals of STI?* + +[horizontal] +Image flexibility:: STI allows you to use almost any existing Docker image as +the base for your application. STI scripts can be written to layer application +code onto almost any existing Docker image, so you can take advantage of the +existing ecosystem. Note that currently STI relies on `tar` and `untar` to +inject application source, so the image needs to be able to process tarred +content. + +Speed:: Adding layers as part of a *_Dockerfile_* can be slow. With STI, the +assemble process can perform a large number of complex operations without +creating a new layer at each step.
In addition, STI scripts can be written to +re-use dependencies stored in a previous version of the application image rather +than re-downloading them each time the build is run. + +Patchability:: If an underlying image needs to be patched due to a security +issue, OpenShift can use STI to rebuild your application on top of the patched +builder image. + +Operational efficiency:: By restricting build operations instead of allowing +arbitrary actions such as in a *_Dockerfile_*, the PaaS operator can avoid +accidental or intentional abuses of the build system. + +Operational security:: Allowing users to build an arbitrary *_Dockerfile_* +exposes the host system to root privilege escalation by a malicious user because +the entire docker build process is run as a user with docker privileges. STI +restricts the operations performed as a root user, and can run the scripts as an +individual user. + +User efficiency:: STI prevents developers from falling into a trap of performing +arbitrary `yum install` type operations during their application build, which +would result in slow development iteration. + +Ecosystem:: Encourages a shared ecosystem of images with best practices you can +leverage for your applications. [#custom-build] == Custom Build @@ -46,33 +98,45 @@ Custom builds are the most sophisticated version of builds, allowing developers base Docker images. [#using-docker-credentials-for-pushing-images] -== Using Docker credentials for pushing images +== Using Docker Credentials for Pushing Images -In case you want to push the output image into private Docker Registry that -requires authentication or Docker Hub, you have to supply the `.dockercfg` file -with valid Docker Registry credentials. +If you want to push the output image to a private Docker registry that +requires authentication, or to Docker Hub, you must supply a `.dockercfg` file +with valid Docker registry credentials.
-The `.dockercfg` JSON file usually exists in your home directory and it has following -format: +The *_.dockercfg_* JSON file usually exists in your home directory and has the +following format: -``` -{"https://index.docker.io/v1/":{"auth":"encrypted_password","email":"foo@bar.com"}} -``` +==== -You can also add authentication entries to this file by running `docker login` -command. The file will be created if it does not exist. - -The 'https://index.docker.io/v1' is the URL of the registry. You can define -multiple Docker registries entries in this file. - -Kubernetes provides the https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/secrets.md[Secret] -resource, which you can use to store your passwords and configuration. -In order to make Build use your `.dockercfg` file for pushing the output image, -you have to create the Secret first. The 'data' field in Secret must contain the -'dockercfg' key with the value set to base64 encoded content of the '.dockercfg' -file. For example: - -``` +---- +{ + "https://index.docker.io/v1/": { <1> + "auth": "YWRfbGzhcGU6R2labnRib21ifTE=", <2> + "email": "foo@bar.com" <3> + } +} +---- + +<1> URL of the registry. +<2> Encrypted password. +<3> Email address for the login. +==== + +You can define multiple Docker registry entries in this file. You can also add +authentication entries to this file by running the `docker login` command. The +file will be created if it does not exist. + +Kubernetes provides the +https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/design/secrets.md[Secret] +resource, which you can use to store your passwords and configuration. You must +create the `*Secret*` before builds can use your *_.dockercfg_* file for +pushing the output image. The `*data*` field for the `*Secret*` object must +contain the `*dockercfg*` key with the value set to the base64-encoded content +of the *_.dockercfg_* file.
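The base64-encoded value itself can be produced with standard tooling. As an illustration, the following Python sketch (not part of OpenShift tooling; the `auth` and `email` values are the placeholder values from the format example in this topic) encodes a *_.dockercfg_*-style document for the `*dockercfg*` key:

```python
# Illustrative sketch: base64-encode a .dockercfg-style document for the
# "dockercfg" key of a Secret. The credentials below are the placeholder
# values from the documentation example, not real credentials.
import base64
import json

dockercfg = {
    "https://index.docker.io/v1/": {
        "auth": "YWRfbGzhcGU6R2labnRib21ifTE=",
        "email": "foo@bar.com",
    }
}

encoded = base64.b64encode(json.dumps(dockercfg).encode("utf-8")).decode("ascii")

secret = {
    "apiVersion": "v1beta3",
    "kind": "Secret",
    "data": {"dockercfg": encoded},
}

# Round-trip check: decoding the Secret data recovers the original document.
assert json.loads(base64.b64decode(secret["data"]["dockercfg"])) == dockercfg
```

An equivalent value can also be produced on the command line with the `base64` utility against the *_.dockercfg_* file.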
For example: + +==== + +---- { "apiVersion": "v1beta3", "kind": "Secret", @@ -84,9 +148,19 @@ file. For example: } } -``` +---- +==== + +To create the `*Secret*` from a *_secret.json_* file, for example, you can use +the following command: + +==== + +---- +$ osc create -f secret.json +---- +==== -To create the secret, you can use 'osc create -f secret.json'. Once you have -this secret created, you can add `PushSecretName` field into `Output` section -inside the BuildConfig and set it to the name of the Secret that you created (in -this case 'dockerhub'). +Once you have the `*Secret*` created, you can add a `PushSecretName` field into +the `Output` section inside the `BuildConfig` and set it to the name of the +`*Secret*` that you created, in this case `*dockerhub*`. diff --git a/architecture/core_objects/openshift_model.adoc b/architecture/core_objects/openshift_model.adoc index f3e5fc94add3..448236b6ee57 100644 --- a/architecture/core_objects/openshift_model.adoc +++ b/architecture/core_objects/openshift_model.adoc @@ -10,39 +10,45 @@ toc::[] == Overview -OpenShift extends the base Kubernetes model to provide a more feature rich development lifecycle platform. +OpenShift extends the base Kubernetes model to provide a more feature-rich +development lifecycle platform. == Build +A link:builds.html[build] is a process of transforming input parameters, +typically source code, into a resulting object, typically a runnable image. == BuildConfig +A link:builds.html#buildconfig[BuildConfig] object is the definition of the +entire link:builds.html[build] process. === Build Strategies -The OpenShift build system provides extensible support for build strategies based on selectable types specified in the build API. By default, two strategies are supported: Docker builds, and Source-to-Image builds. +The OpenShift build system provides extensible support for build strategies +based on selectable types specified in the build API.
By default, two strategies +are supported: Docker builds, and Source-to-Image builds. -==== Docker build -OpenShift supports pure Docker builds. Using this strategy, users may supply a URL to a Docker context which is used as the basis for a https://docs.docker.com/reference/commandline/cli/#build[Docker build]. +*Docker Build* [[docker-build]] -==== Source-to-Image -Source-to-image (sti) is a tool for building reproducible Docker images. It produces ready-to-run images by injecting a user source into a docker image and assembling a new Docker image which incorporates the base image and built source, and is ready to use with `docker run`. STI supports incremental builds which re-use previously downloaded dependencies, previously built artifacts, etc. +OpenShift supports pure Docker builds. Using this strategy, users may supply a +URL to a Docker context which is used as the basis for a +https://docs.docker.com/reference/commandline/cli/#build[Docker build]. -===== So why would you want to use this? There were a few goals for STI. +*Source-to-Image (STI) Build* [[source-to-image]] -* Image flexibility: STI allows you to use almost any existing Docker image as the base for your application. STI scripts can be written to layer application code onto almost any existing Docker image, so you can take advantage of the existing ecosystem. (Why only “almost” all images? Currently STI relies on tar/untar to inject application source so the image needs to be able to process tarred content.) -* Speed: Adding layers as part of a Dockerfile can be slow. With STI the assemble process can perform a large number of complex operations without creating a new layer at each step. In addition, STI scripts can be written to re-use dependencies stored in a previous version of the application image rather than re-downloading them each time the build is run. 
-* Patchability: If an underlying image needs to be patched due to a security issue, OpenShift can use STI to rebuild your application on top of the patched builder image. -* Operational efficiency: By restricting build operations instead of allowing arbitrary actions such as in a Dockerfile, the PaaS operator can avoid accidental or intentional abuses of the build system. -* Operational security: Allowing users to build arbitrary Dockerfiles exposes the host system to root privilege escalation by a malicious user because the entire docker build process is run as a user with docker privileges. STI restricts the operations performed as a root user, and can run the scripts as an individual user -* User efficiency: STI prevents developers from falling into a trap of performing arbitrary “yum install” type operations during their application build, which would result in slow development iteration. -* Ecosystem: Encourages a shared ecosystem of images with best practices you can leverage for your applications. +link:builds.html#sti-build[STI builds] are a replacement for the OpenShift v2-like developer experience. The developer specifies the repository where their project is located and a builder image, which defines the language and framework used for writing their application. STI then assembles a new image which runs the application defined by the source using the framework defined by the builder image. + +*Custom Build* [[custom-build]] -==== Custom build The custom build strategy is very similar to *Docker build* strategy, but users might customize the builder image that will be used for build execution. The Docker build uses https://registry.hub.docker.com/u/openshift/docker-builder/[openshift/docker-builder] image by default. Using your own builder image allows you to customize your build process. == BuildLog +Logs from the containers where the build occurred are accessible +link:../../dev_guide/builds.html#accessing-build-logs[using the CLI].
== Deployment +See link:../../dev_guide/deployments.html[Deployments]. == DeploymentConfig +See link:../../dev_guide/deployments.html[Deployments]. == Image OpenShift stores information about Docker images including the "pull spec" (what you'd use to pull the image) and complete metadata about the image (e.g. command, entrypoint, environment variables, etc.). Images in OpenShift are immutable. diff --git a/architecture/core_objects/overview.adoc b/architecture/core_objects/overview.adoc index 546a30882a75..8d97ce35d5eb 100644 --- a/architecture/core_objects/overview.adoc +++ b/architecture/core_objects/overview.adoc @@ -5,3 +5,7 @@ :icons: :experimental: +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. diff --git a/architecture/core_objects/routing.adoc b/architecture/core_objects/routing.adoc index 03bc90ac15ab..229fa62fdb21 100644 --- a/architecture/core_objects/routing.adoc +++ b/architecture/core_objects/routing.adoc @@ -32,7 +32,7 @@ calls to another system, such as *F5*. Other capabilities exist to load-balance a service within a cluster. These services are exposed via a configurable link relation between different services, and ensure a set of services can be available. -link:../using_openshift/deployments.html[Deployments] can use these services as +link:../../dev_guide/deployments.html[Deployments] can use these services as local proxies for each host, or reuse the shared routing infrastructure. As an OpenShift administrator, you can configure routers in your instance. This @@ -280,7 +280,7 @@ for a password upon starting and does not have a way to automate this process. 
To remove a passphrase from a keyfile, you can run: **** -`# openssl rsa -in _passwordProtectedKey.key_ -out _new.key_` +`# openssl rsa -in __passwordProtectedKey.key__ -out __new.key__` **** When creating a secure route, you must include your certificate files as a diff --git a/architecture/infrastructure_components/image_registry.adoc b/architecture/infrastructure_components/image_registry.adoc index 1ecea24d11eb..0f18d2d63aeb 100644 --- a/architecture/infrastructure_components/image_registry.adoc +++ b/architecture/infrastructure_components/image_registry.adoc @@ -10,19 +10,35 @@ toc::[] == Overview -OpenShift utilizes any server implementing the Docker registry API as a source of images, including the canonical Docker Hub, private registries run by third parties, and the integrated OpenShift registry. +OpenShift utilizes any server implementing the Docker registry API as a source +of images, including the canonical Docker Hub, private registries run by third +parties, and the integrated OpenShift registry. == Integrated OpenShift Registry -OpenShift provides an integrated Docker registry that adds the ability to provision new image repositories on the fly (this feature is still a work in progress). This allows users to automatically have a place for their builds to push the resulting images. +OpenShift provides an integrated Docker registry that adds the ability to +provision new image repositories on the fly (this feature is still a work in +progress). This allows users to automatically have a place for their builds to +push the resulting images. -Whenever a new image is pushed to the integrated registry, the registry notifies OpenShift about the new image, passing along all the information about it, such as the namespace, name, and image metadata. Different pieces of OpenShift react to new images, creating new link:builds.html[builds] and link:../using_openshift/deployments.html[deployments].
+Whenever a new image is pushed to the integrated registry, the registry notifies +OpenShift about the new image, passing along all the information about it, such +as the namespace, name, and image metadata. Different pieces of OpenShift react +to new images, creating new link:../core_objects/builds.html[builds] and +link:../../dev_guide/deployments.html[deployments]. == Third Party Registries -OpenShift can create containers using images from third party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift registry. If not, OpenShift can poll the other registries for changes to image repositories. When new images are detected, the same build and deployment reactions described above occur. +OpenShift can create containers using images from third party registries, but it +is unlikely that these registries offer the same image notification support as +the integrated OpenShift registry. If not, OpenShift can poll the other +registries for changes to image repositories. When new images are detected, the +same build and deployment reactions described above occur. NOTE: Polling is not implemented yet. === Authentication -OpenShift can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift to push and pull images to and from private repositories. +OpenShift can communicate with registries to access private image repositories +using credentials supplied by the user. This allows OpenShift to push and pull +images to and from private repositories. -See the link:authentication.html[Authentication] topic for more information. +See the link:../additional_concepts/authentication.html[Authentication] topic +for more information. 
diff --git a/architecture/infrastructure_components/kubernetes_infrastructure.adoc b/architecture/infrastructure_components/kubernetes_infrastructure.adoc index 59f2ab7ab14e..d5f819f6bb58 100644 --- a/architecture/infrastructure_components/kubernetes_infrastructure.adoc +++ b/architecture/infrastructure_components/kubernetes_infrastructure.adoc @@ -21,7 +21,7 @@ A Kubernetes cluster consists of a master and a set of nodes. The master is the host or hosts that contain the master components, including the API server, controller manager server, and *etcd*. The master manages link:#node[nodes] in its Kubernetes cluster and schedules -link:kubernetes_model.html#pod[pods] to run on nodes. +link:../core_objects/kubernetes_model.html#pod[pods] to run on nodes. [cols="1,4"] .Master Components @@ -75,7 +75,7 @@ https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/node.md#node- documentation] for more information on node management. As an OpenShift administrator, you can -link:../using_openshift/managing_nodes.html[manage nodes] in your instance using +link:../../admin_guide/manage_nodes.html[manage nodes] in your instance using the CLI. [[kubelet]] diff --git a/architecture/infrastructure_components/overview.adoc b/architecture/infrastructure_components/overview.adoc index 546a30882a75..8d97ce35d5eb 100644 --- a/architecture/infrastructure_components/overview.adoc +++ b/architecture/infrastructure_components/overview.adoc @@ -5,3 +5,7 @@ :icons: :experimental: +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. 
diff --git a/cli_reference/get_started_cli.adoc b/cli_reference/get_started_cli.adoc index 5b85f9b4bf09..92166d6e175f 100644 --- a/cli_reference/get_started_cli.adoc +++ b/cli_reference/get_started_cli.adoc @@ -10,13 +10,14 @@ toc::[] == Basic Setup and Login -The `osc login` command is the best way to set up and configure the OpenShift CLI, -and it serves as the entry point for most users. The interactive flow helps you -establish a session to an OpenShift server with the provided credentials. The -configuration is automatically saved and is then used for every subsequent +The `osc login` command is the best way to set up and configure the OpenShift +CLI, and it serves as the entry point for most users. The interactive flow helps +you establish a session to an OpenShift server with the provided credentials. +The configuration is automatically saved and is then used for every subsequent command. -The following example shows the interactive setup and login using the `osc login` command: +The following example shows the interactive setup and login using the `osc +login` command: .CLI Setup and Login ==== @@ -60,8 +61,8 @@ the `osc login` command. == CLI Configuration Files -A CLI configuration file permanently stores `osc` options and contains a series of -authentication mechanisms and server connection information associated with +A CLI configuration file permanently stores `osc` options and contains a series +of authentication mechanisms and server connection information associated with nicknames. As described in the previous section, the `osc login` command automatically @@ -114,8 +115,8 @@ configuration, if the following files exist or options are specified: . The *_.kubeconfig_* file in the current directory. . The *_.kubeconfig_* file inside the *_.kube_* directory in the user's home: `~/.kube/.kubeconfig` -You can easily link:setup_multiple_cli_profiles.html[configure and manage multiple CLI -profiles] using the `osc config` command. 
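The configuration file lookup described above can be sketched in Python. This is only an illustration of the documented precedence (the current directory before the user's home directory), not the actual `osc` implementation:

```python
# Illustrative sketch of .kubeconfig lookup precedence: a .kubeconfig in
# the current directory wins over ~/.kube/.kubeconfig.
import os

def find_kubeconfig(cwd, home):
    """Return the first .kubeconfig found, mirroring the documented order."""
    candidates = [
        os.path.join(cwd, ".kubeconfig"),
        os.path.join(home, ".kube", ".kubeconfig"),
    ]
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None
```

In practice, options such as the `KUBECONFIG` environment variable can override this file-based lookup, so treat the sketch as covering only the two file locations listed above.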
+You can easily link:setup_multiple_cli_profiles.html[configure and manage +multiple CLI profiles] using the `osc config` command. == Projects A link:../dev_guide/projects.html[project] in OpenShift is the package that @@ -128,11 +129,13 @@ link:../dev_guide/projects.html[project]. The `osc login` selects a default project during link:#basic-setup-and-login[initial setup] to be used with subsequent commands. Use the following command to display the project currently in use: + **** `$ osc project` **** -If you have access to multiple projects, use the following syntax to switch to a particular project by specifying the project name: +If you have access to multiple projects, use the following syntax to switch to a +particular project by specifying the project name: **** `$ osc project __` diff --git a/cli_reference/overview.adoc b/cli_reference/overview.adoc index 8b94475d0045..2c8f2145a0a2 100644 --- a/cli_reference/overview.adoc +++ b/cli_reference/overview.adoc @@ -5,22 +5,21 @@ :icons: :experimental: -With the OpenShift command line interface (CLI), you can create and manage OpenShift projects from a terminal. The CLI is ideal in situations where you are: +With the OpenShift command line interface (CLI), you can create applications and +manage OpenShift projects from a terminal. The CLI is ideal in situations where +you are: -* Working directly with project source code -* Scripting OpenShift operations -* Restricted by bandwidth resources and cannot use the web console +* Working directly with project source code. +* Scripting OpenShift operations. +* Restricted by bandwidth resources and cannot use the Management Console. 
-The CLI commands are available directly from the OpenShift binary with the following syntax: +The OpenShift CLI is available using the `osc` binary: **** -`openshift cli [replaceable]##` +`$ osc __` **** -However, if `osc` is available on your workstation you can use it as a shortcut in place of `openshift cli` in the command syntax: - -**** -`osc [replaceable]##` -**** - -NOTE: Although `osc` is used in the command examples presented throughout this document, you can substitute `openshift cli` in the command syntax if `osc` is not available on your workstation. \ No newline at end of file +NOTE: The CLI command examples presented throughout the OpenShift documentation +use `osc` command syntax. If the `osc` binary is not available on your +workstation, you can substitute `openshift cli` in the examples if you have the +`openshift` binary. diff --git a/creating_images/guidelines.adoc b/creating_images/guidelines.adoc index 815f348ddfc8..9559262fa6bc 100644 --- a/creating_images/guidelines.adoc +++ b/creating_images/guidelines.adoc @@ -10,137 +10,267 @@ toc::[] == Overview -When creating Docker images to run on OpenShift, there are a number of best practices to consider as an image author to ensure a good experience for consumers of those images. Because images are intended to be immutable and used as-is, the following guidelines help ensure that your images are highly consumable and easy to use on OpenShift. +When creating Docker images to run on OpenShift, there are a number of best +practices to consider as an image author to ensure a good experience for +consumers of those images. Because images are intended to be immutable and used +as-is, the following guidelines help ensure that your images are highly +consumable and easy to use on OpenShift. == General Docker Guidelines -The following guidelines apply when creating a Docker image in general, and are independent of whether the images are used on OpenShift.
Also see the following references for more comprehensive guidelines: +The following guidelines apply when creating a Docker image in general, and are +independent of whether the images are used on OpenShift. Also see the following +references for more comprehensive guidelines: - Docker documentation - https://docs.docker.com/articles/dockerfile_best-practices/[Best practices for writing Dockerfiles] - Project Atomic documentation - http://www.projectatomic.io/docs/docker-image-author-guidance/[Guidance for Docker Image Authors] *Reuse Images* -Wherever possible, we recommend that you base your image on an appropriate upstream image using the `FROM` statement. This ensures your image can easily pick up security fixes from an upstream image when it is updated, rather than you having to update your dependencies directly. +Wherever possible, we recommend that you base your image on an appropriate +upstream image using the `FROM` statement. This ensures your image can easily +pick up security fixes from an upstream image when it is updated, rather than +you having to update your dependencies directly. -In addition, use tags in the `FROM` instruction (for example, `rhel:rhel7`) to make it clear to users exactly which version of an image your image is based on. Using a tag other than `latest` ensures your image is not subjected to breaking changes that might go into the `latest` version of an upstream image. +In addition, use tags in the `FROM` instruction (for example, `rhel:rhel7`) to +make it clear to users exactly which version of an image your image is based on. +Using a tag other than `latest` ensures your image is not subjected to breaking +changes that might go into the `latest` version of an upstream image. *Maintain Compatibility Within Tags* -When tagging your own images, we recommend that you try to maintain backwards compatibility within a tag. 
For example, if you provide an image named [sysitem]#foo# and it currently includes version 1.0, you might provide a tag of _foo:v1_. When you update the image, as long as it continues to be compatible with the original image, you can continue to tag the new image _foo:v1_, and downstream consumers of this tag will be able to get updates without being broken. +When tagging your own images, we recommend that you try to maintain backwards +compatibility within a tag. For example, if you provide an image named +_foo_ and it currently includes version 1.0, you might provide a tag of +_foo:v1_. When you update the image, as long as it continues to be compatible +with the original image, you can continue to tag the new image _foo:v1_, and +downstream consumers of this tag will be able to get updates without being +broken. -If you later release an incompatible update, then you should switch to a new tag, for example _foo:v2_. This allows downstream consumers to move up to the new version at will, but not be inadvertently broken by the new incompatible image. Any downstream consumer using _foo:latest_ takes on the risk of any incompatible changes being introduced. +If you later release an incompatible update, then you should switch to a new +tag, for example _foo:v2_. This allows downstream consumers to move up to the +new version at will, but not be inadvertently broken by the new incompatible +image. Any downstream consumer using _foo:latest_ takes on the risk of any +incompatible changes being introduced. *Avoid Multiple Processes* -We recommend that you do not start multiple services, such as a database and [sysitem]#sshd#, inside one container. This is not necessary because containers are lightweight and can be easily linked together for orchestrating multiple processes. OpenShift allows you to easily collocate and co-manage related images by grouping them into a single pod. 
+We recommend that you do not start multiple services, such as a database and +*SSHD*, inside one container. This is not necessary because containers +are lightweight and can be easily linked together for orchestrating multiple +processes. OpenShift allows you to easily collocate and co-manage related images +by grouping them into a single pod. -This collocation ensures the containers share a network namespace and storage for communication. Updates are also less disruptive as each image can be updated less frequently and independently. Signal handling flows are also clearer with a single process as you do not need to manage routing signals to spawned processes. +This collocation ensures the containers share a network namespace and storage +for communication. Updates are also less disruptive as each image can be updated +less frequently and independently. Signal handling flows are also clearer with a +single process as you do not need to manage routing signals to spawned +processes. *Use `exec` in Wrapper Scripts* -See the "Always `exec` in Wrapper Scripts" section of the http://www.projectatomic.io/docs/docker-image-author-guidance[Project Atomic documentation] for more information. +See the "Always `exec` in Wrapper Scripts" section of the +http://www.projectatomic.io/docs/docker-image-author-guidance[Project Atomic +documentation] for more information. -Also note that your process runs as PID 1 when running in a Docker container. This means that if your main process terminates, the entire container is stopped, killing any child processes you may have launched from your PID 1 process. +Also note that your process runs as PID 1 when running in a Docker container. +This means that if your main process terminates, the entire container is +stopped, killing any child processes you may have launched from your PID 1 +process. 
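The PID 1 behavior described above can be demonstrated with a shell alone. This sketch is not from the original document; it shows that `exec` replaces the calling shell in place, which is why an exec'd server inherits the wrapper's PID and receives signals directly:

```shell
#!/bin/sh
# With exec, the command replaces the shell and keeps its PID;
# in a container, this is what makes your server run as PID 1.
with_exec=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
set -- $with_exec
echo "with exec:    shell=$1 command=$2"   # same PID

# Without exec, the command runs as a forked child with a new PID.
# (The trailing ":" prevents the shell's tail-call exec optimization.)
without_exec=$(sh -c 'echo $$; sh -c "echo \$\$"; :')
set -- $without_exec
echo "without exec: shell=$1 command=$2"   # different PIDs
```

In the second case, a `SIGTERM` from `docker stop` would be delivered to the wrapper shell rather than the server, which is the failure mode the guideline warns about.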
-See the http://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/["Docker and the PID 1 zombie reaping problem"] blog article for additional implications. Also see the https://felipec.wordpress.com/2013/11/04/init/["Demystifying the init system (PID 1)"] blog article for a deep dive on PID 1 and [sysitem]#init# systems.
+See the
+http://blog.phusion.nl/2015/01/20/docker-and-the-pid-1-zombie-reaping-problem/["Docker
+and the PID 1 zombie reaping problem"] blog article for additional implications.
+Also see the https://felipec.wordpress.com/2013/11/04/init/["Demystifying the
+init system (PID 1)"] blog article for a deep dive on PID 1 and *init*
+systems.

*Clean Temporary Files*

-All temporary files you create during the build process should be removed. This also includes any files added with the `ADD` command. For example, we strongly recommended that you run the `yum clean` command after performing `yum install` operations.
+All temporary files you create during the build process should be removed. This
+also includes any files added with the `ADD` command. For example, we strongly
+recommend that you run the `yum clean` command after performing `yum install`
+operations.

-You can prevent the `yum` cache from ending up in an image layer by creating your `RUN` statement as follows:
+You can prevent the `yum` cache from ending up in an image layer by creating
+your `RUN` statement as follows:
+
+====

----
RUN yum -y install mypackage && yum -y install myotherpackage && yum clean all -y
----
+====

Note that if you instead write:

+====
+
----
RUN yum -y install mypackage
RUN yum -y install myotherpackage && yum clean all -y
----
+====

-Then the first `yum` invocation leaves extra files in that layer, and these files cannot be removed when the `yum clean` operation is run later. The extra files are not visible in the final image, but they are present in the underlying layers.
+Then the first `yum` invocation leaves extra files in that layer, and these +files cannot be removed when the `yum clean` operation is run later. The extra +files are not visible in the final image, but they are present in the underlying +layers. -The current Docker build process does not allow a command run in a later layer to shrink the space used by the image when something was removed in an earlier layer. However, this may change in the future. This means that if you perform an `rm` command in a later layer, although the files are hidden it does not reduce the overall size of the image to be downloaded. Therefore, as with the `yum clean` example, it is best to remove files in the same command that created them, where possible, so they do not end up written to a layer. +The current Docker build process does not allow a command run in a later layer +to shrink the space used by the image when something was removed in an earlier +layer. However, this may change in the future. This means that if you perform an +`rm` command in a later layer, although the files are hidden it does not reduce +the overall size of the image to be downloaded. Therefore, as with the `yum +clean` example, it is best to remove files in the same command that created +them, where possible, so they do not end up written to a layer. -In addition, performing multiple commands in a single `RUN` statement reduces the number of layers in your image, which improves download and extraction time. +In addition, performing multiple commands in a single `RUN` statement reduces +the number of layers in your image, which improves download and extraction time. *Place Instructions in the Proper Order* -Docker reads the [sysitem]#Dockerfile# and runs the instructions from top to bottom. Every instruction that is successfully executed creates a layer which can be reused the next time this or another image is built. 
It is very important to place instructions that will rarely change at the top of your [sysitem]#Dockerfile#. Doing so ensures the next builds of the same image are very fast because the cache is not invalidated by upper layer changes. +Docker reads the *_Dockerfile_* and runs the instructions from top to +bottom. Every instruction that is successfully executed creates a layer which +can be reused the next time this or another image is built. It is very important +to place instructions that will rarely change at the top of your +*_Dockerfile_*. Doing so ensures the next builds of the same image are +very fast because the cache is not invalidated by upper layer changes. + +For example, if you are working on a *_Dockerfile_* that contains an `ADD` +command to install a file you are iterating on, and a `RUN` command to `yum +install` a package, it is best to put the `ADD` command last: -For example, if you are working on a [sysitem]#Dockerfile# that contains an `ADD` command to install a file you are iterating on, and a `RUN` command to `yum install` a package, it is best to put the `ADD` command last: +==== ---- FROM foo RUN yum -y install mypackage && yum clean all -y ADD myfile /test/myfile ---- +==== -This way each time you edit `myfile` and rerun `docker build`, the system reuses the cached layer for the `yum` command and only generates the new layer for the `ADD` operation. +This way each time you edit *_myfile_* and rerun `docker build`, the system reuses +the cached layer for the `yum` command and only generates the new layer for the +`ADD` operation. -If instead you wrote the [sysitem]#Dockerfile# as: +If instead you wrote the *_Dockerfile_* as: + +==== ---- FROM foo ADD myfile /test/myfile RUN yum -y install mypackage && yum clean all -y ---- +==== -Then each time you changed `myfile` and reran `docker build`, the `ADD` operation would invalidate the `RUN` layer cache, so the `yum` operation would need to be rerun as well. 
+Then each time you changed *_myfile_* and reran `docker build`, the `ADD` +operation would invalidate the `RUN` layer cache, so the `yum` operation would +need to be rerun as well. *Mark Important Ports* -See the "Always `EXPOSE` Important Ports" section of the http://www.projectatomic.io/docs/docker-image-author-guidance[Project Atomic documentation] for more information. +See the "Always `EXPOSE` Important Ports" section of the +http://www.projectatomic.io/docs/docker-image-author-guidance[Project Atomic +documentation] for more information. *Set Environment Variables* -It is good practice to set environment variables with the `ENV` instruction. One example is to set the version of your project. This makes it easy for people to find the version without looking at the [sysitem]#Dockerfile#. Another example is advertising a path on the system that could be used by another process, such as `JAVA_HOME`. +It is good practice to set environment variables with the `ENV` instruction. +One example is to set the version of your project. This makes it easy for people +to find the version without looking at the *_Dockerfile_*. Another example is +advertising a path on the system that could be used by another process, such as +`*JAVA_HOME*`. *Avoid Default Passwords* -It is best to avoid setting default passwords. Many people will extend the image and forget to remove or change the default password. This can lead to security issues if a user in production is assigned a well-known password. Passwords should be configurable using an environment variable instead. See the link:#use-env-vars[Using Environment Variables for Configuration] topic for more information. +It is best to avoid setting default passwords. Many people will extend the image +and forget to remove or change the default password. This can lead to security +issues if a user in production is assigned a well-known password. Passwords +should be configurable using an environment variable instead. 
See the
+link:#use-env-vars[Using Environment Variables for Configuration] topic for more
+information.

-If you do choose to set a default password, ensure that an appropriate warning message is displayed when the container is started. The message should inform the user of the value of the default password and explain how to change it, such as what environment variable to set.
+If you do choose to set a default password, ensure that an appropriate warning
+message is displayed when the container is started. The message should inform
+the user of the value of the default password and explain how to change it, such
+as what environment variable to set.

*Avoid SSHD*

-It is best to avoid running [sysitem]#SSHD# in your image. For accessing running containers, You can use the `docker exec` command locally to access containers that are running. Alternatively, you can use the OpenShift tooling since it allows you to execute arbitrary commands in images that are running. Installing and running [sysitem]#SSHD# in your image opens up additional vectors for attack and requirements for security patching.
+It is best to avoid running *SSHD* in your image. You can use the `docker exec`
+command locally to access running containers. Alternatively, you can use the
+OpenShift tooling, since it allows you to execute arbitrary commands in running
+containers. Installing and running *SSHD* in your image opens up additional
+vectors for attack and requirements for security patching.

*Use Volumes for Persistent Data*

-Images should use a https://docs.docker.com/reference/builder/#volume[Docker volume] for persistent data. This way OpenShift mounts the network storage to the node running the container, and if the container moves to a new node the storage is reattached to that node. By using the volume for all persistent storage needs, the content is preserved even if the container is restarted or moved.
If your image writes data to arbitrary locations within the container, that content might not be preserved. +Images should use a https://docs.docker.com/reference/builder/#volume[Docker +volume] for persistent data. This way OpenShift mounts the network storage to +the node running the container, and if the container moves to a new node the +storage is reattached to that node. By using the volume for all persistent +storage needs, the content is preserved even if the container is restarted or +moved. If your image writes data to arbitrary locations within the container, +that content might not be preserved. -All data that needs to be preserved even after the container is destroyed must be written to a volume. With Docker 1.5, there will be a `readonly` flag for containers which can be used to strictly enforce good practices about not writing data to ephemeral storage in a container. Designing your image around that capability now will make it easier to take advantage of it later. +All data that needs to be preserved even after the container is destroyed must +be written to a volume. With Docker 1.5, there will be a `readonly` flag for +containers which can be used to strictly enforce good practices about not +writing data to ephemeral storage in a container. Designing your image around +that capability now will make it easier to take advantage of it later. -Furthermore, explicitly defining volumes in your [sysitem]#Dockerfile# makes it easy for consumers of the image to understand what volumes they need to define when running your image. +Furthermore, explicitly defining volumes in your *_Dockerfile_* makes it easy +for consumers of the image to understand what volumes they need to define when +running your image. -See the https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md[Kubernetes documentation] for more information on how volumes are used in OpenShift. 
+See the +https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md[Kubernetes +documentation] for more information on how volumes are used in OpenShift. //// For more information on how Volumes are used in OpenShift, see https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md[this documentation]. (NOTE to docs team: this link should really go to something in the openshift docs, once we have it) //// -NOTE: Even with persistent volumes, each instance of your image has its own volume, and the filesystem is not shared between instances. This means the volume cannot be used to share state in a cluster. +NOTE: Even with persistent volumes, each instance of your image has its own +volume, and the filesystem is not shared between instances. This means the +volume cannot be used to share state in a cluster. == OpenShift-Specific Guidelines -The following are guidelines that apply when creating Docker images specifically for use on OpenShift. +The following are guidelines that apply when creating Docker images specifically +for use on OpenShift. *Enable Images for Source-To-Image (STI)* -For images that are intended to run application code provided by a third party, such as a Ruby image designed to run Ruby code provided by a developer, you can enable your image to work with the https://github.com/openshift/source-to-image[Source-to-Image (STI)] build tool. STI is a framework which makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. +For images that are intended to run application code provided by a third party, +such as a Ruby image designed to run Ruby code provided by a developer, you can +enable your image to work with the +https://github.com/openshift/source-to-image[Source-to-Image (STI)] build tool. 
+STI is a framework which makes it easy to write images that take application +source code as an input and produce a new image that runs the assembled +application as output. -For example, this https://github.com/openshift/wildfly-8-centos[Wildfly image] defines STI scripts which run a `maven` build on a Java source repository and copy the resulting [sysitem]#war# file into the Wildfly deployments directory. The resulting image now automatically starts Wildfly with the application running. +For example, this https://github.com/openshift/wildfly-8-centos[Wildfly image] +defines STI scripts which run a `maven` build on a Java source repository and +copy the resulting *_war_* file into the Wildfly deployments directory. +The resulting image now automatically starts Wildfly with the application +running. -For more details about how to write STI scripts for your image, see the link:sti.html[STI Requirements] topic. +For more details about how to write STI scripts for your image, see the +link:sti.html[STI Requirements] topic. [[use-services]] *Use Services for Inter-image Communication* -For cases where your image needs to communicate with a service provided by another image, such as a web front end image that needs to access a database image to store and retrieve data, your image should consume an OpenShift link:../architecture/kubernetes_model.html#service[service]. Services provide a static endpoint for access which does not change as containers are stopped, started, or moved. In addition, services provide load balancing for requests. +For cases where your image needs to communicate with a service provided by +another image, such as a web front end image that needs to access a database +image to store and retrieve data, your image should consume an OpenShift +link:../architecture/core_objects/kubernetes_model.html#service[service]. +Services provide a static endpoint for access which does not change as +containers are stopped, started, or moved. 
In addition, services provide load +balancing for requests. //// For more information see https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/services.md[this documentation]. (NOTE to docs team: this link should really go to something in the openshift docs once we have it) @@ -148,25 +278,58 @@ For more information see https://github.com/GoogleCloudPlatform/kubernetes/blob/ *Provide Common Libraries* -For images that are intended to run application code provided by a third party, ensure that your image contains commonly used libraries for your platform. In particular, provide database drivers for common databases used with your platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are creating a Java framework image. Doing so prevents the need for common dependencies to be downloaded during application assembly time, speeding up application image builds. It also simplifies the work required by application developers to ensure all of their dependencies are met. +For images that are intended to run application code provided by a third party, +ensure that your image contains commonly used libraries for your platform. In +particular, provide database drivers for common databases used with your +platform. For example, provide JDBC drivers for MySQL and PostgreSQL if you are +creating a Java framework image. Doing so prevents the need for common +dependencies to be downloaded during application assembly time, speeding up +application image builds. It also simplifies the work required by application +developers to ensure all of their dependencies are met. [[use-env-vars]] *Use Environment Variables for Configuration* -Users of your image should be able to configure it without having to create a downstream image based on your image. This means that the runtime configuration should be handled using environment variables. -For a simple configuration, the running process can consume the environment variables directly. 
For a more complicated configuration or for runtimes which do not support this, configure the runtime by defining a template configuration file that is processed during startup. During this processing, values supplied using environment variables can be substituted into the configuration file or used to make decisions about what options to set in the configuration file. - -It is also possible and recommended to pass secrets such as certificates and keys into the container using environment variables. This ensures that the secret values do not end up committed in an image and leaked into a Docker registry. - -Providing environment variables allows consumers of your image to customize behavior, such as database settings, passwords, and performance tuning, without having to introduce a new layer on top of your image. Instead, they can simply define environment variable values when defining a pod and change those settings without rebuilding the image. - -For extremely complex scenarios, configuration can also be supplied using volumes that would be mounted into the container at runtime. However, if you elect to do it this way you must ensure that your image provides clear error messages on startup when the necessary volume or configuration is not present. - -This topic is related to the link:#use-services[Using Services for Inter-image Communication] topic in that configuration like datasources should be defined in terms of environment variables that provide the service endpoint information. This allows an application to dynamically consume a datasource service that is defined in the OpenShift environment without modifying the application image. - -In addition, tuning should be done by inspecting the [sysitem]#cgroups# settings for the container. This allows the image to tune itself to the available memory, CPU, and other resources. 
For example, Java-based images should tune their heap based on the [sysitem]#cgroup# maximum memory parameter to ensure they do not exceed the limits and get an out-of-memory error. - -See the following references for more on how to manage [sysitem]#cgroup# quotas in Docker containers: +Users of your image should be able to configure it without having to create a +downstream image based on your image. This means that the runtime configuration +should be handled using environment variables. For a simple configuration, the +running process can consume the environment variables directly. For a more +complicated configuration or for runtimes which do not support this, configure +the runtime by defining a template configuration file that is processed during +startup. During this processing, values supplied using environment variables can +be substituted into the configuration file or used to make decisions about what +options to set in the configuration file. + +It is also possible and recommended to pass secrets such as certificates and +keys into the container using environment variables. This ensures that the +secret values do not end up committed in an image and leaked into a Docker +registry. + +Providing environment variables allows consumers of your image to customize +behavior, such as database settings, passwords, and performance tuning, without +having to introduce a new layer on top of your image. Instead, they can simply +define environment variable values when defining a pod and change those settings +without rebuilding the image. + +For extremely complex scenarios, configuration can also be supplied using +volumes that would be mounted into the container at runtime. However, if you +elect to do it this way you must ensure that your image provides clear error +messages on startup when the necessary volume or configuration is not present. 
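The template approach described above can be sketched as a start-up script that substitutes environment variables into a configuration file before handing off to the main process. This sketch is not from the original document; the `DB_HOST`/`DB_PORT` variables, the `@PLACEHOLDER@` template format, and the file handling are all invented for illustration:

```shell
#!/bin/sh
# Render a config file from a template at container start-up.
# DB_HOST/DB_PORT are illustrative; defaults keep the image usable as-is.
DB_HOST="${DB_HOST:-localhost}"
DB_PORT="${DB_PORT:-5432}"

tmpl=$(mktemp)
conf=$(mktemp)
printf 'host=@DB_HOST@\nport=@DB_PORT@\n' > "$tmpl"

# Substitute the placeholders with the environment-supplied values.
sed -e "s|@DB_HOST@|$DB_HOST|" -e "s|@DB_PORT@|$DB_PORT|" "$tmpl" > "$conf"
cat "$conf"

# A real image would now hand control to the server process, for example:
# exec /usr/bin/myapp --config "$conf"
```

A pod definition can then set `DB_HOST` and `DB_PORT` without introducing a new image layer; if a required variable has no safe default, the script should instead fail fast with a clear error message, as recommended above.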
+ +This topic is related to the link:#use-services[Using Services for Inter-image +Communication] topic in that configuration like datasources should be defined in +terms of environment variables that provide the service endpoint information. +This allows an application to dynamically consume a datasource service that is +defined in the OpenShift environment without modifying the application image. + +In addition, tuning should be done by inspecting the *cgroups* settings +for the container. This allows the image to tune itself to the available memory, +CPU, and other resources. For example, Java-based images should tune their heap +based on the *cgroup* maximum memory parameter to ensure they do not +exceed the limits and get an out-of-memory error. + +See the following references for more on how to manage *cgroup* quotas +in Docker containers: - Blog article - https://goldmann.pl/blog/2014/09/11/resource-management-in-docker[Resource management in Docker] - Docker documentation - https://docs.docker.com/articles/runmetrics[Runtime Metrics] @@ -174,21 +337,36 @@ See the following references for more on how to manage [sysitem]#cgroup# quotas *Set Image Metadata* -Defining image metadata helps OpenShift better consume your Docker images, allowing OpenShift to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed. +Defining image metadata helps OpenShift better consume your Docker images, +allowing OpenShift to create a better experience for developers using your +image. For example, you can add metadata to provide helpful descriptions of your +image, or offer suggestions on other images that may also be needed. -See the link:metadata.html[Image Metadata] topic for more information on supported metadata and how to define them. 
+See the link:metadata.html[Image Metadata] topic for more information on +supported metadata and how to define them. *Clustering* -You must fully understand what it means to run multiple instances of your image. In the simplest case, the load balancing function of a service handles routing traffic to all instances of your image. However, many frameworks need to share information in order to perform leader election or failover state; for example, in session replication. +You must fully understand what it means to run multiple instances of your image. +In the simplest case, the load balancing function of a service handles routing +traffic to all instances of your image. However, many frameworks need to share +information in order to perform leader election or failover state; for example, +in session replication. -Consider how your instances accomplish this communication when running in OpenShift. Although pods can communicate directly with each other, their IP addresses change anytime the pod starts, stops, or is moved. Therefore, it is important for your clustering scheme to be dynamic. +Consider how your instances accomplish this communication when running in +OpenShift. Although pods can communicate directly with each other, their IP +addresses change anytime the pod starts, stops, or is moved. Therefore, it is +important for your clustering scheme to be dynamic. *Logging* -It is best to send all logging to standard out. OpenShift collects standard out from containers and sends it to the centralized logging service where it can be viewed. If you need to separate log content, prefix the output with an appropriate keyword, which makes it possible to filter the messages. +It is best to send all logging to standard out. OpenShift collects standard out +from containers and sends it to the centralized logging service where it can be +viewed. 
If you need to separate log content, prefix the output with an +appropriate keyword, which makes it possible to filter the messages. -If your image logs to a file, users must use manual operations to enter the running container and retrieve or view the log file. +If your image logs to a file, users must use manual operations to enter the +running container and retrieve or view the log file. == External References * https://docs.docker.com/articles/basics[Docker basics] diff --git a/creating_images/metadata.adoc b/creating_images/metadata.adoc index e737dfa02e98..761e4d050a92 100644 --- a/creating_images/metadata.adoc +++ b/creating_images/metadata.adoc @@ -10,14 +10,25 @@ toc::[] == Overview -Defining image metadata helps OpenShift better consume your Docker images, allowing OpenShift to create a better experience for developers using your image. For example, you can add metadata to provide helpful descriptions of your image, or offer suggestions on other images that may also be needed. +Defining image metadata helps OpenShift better consume your Docker images, +allowing OpenShift to create a better experience for developers using your +image. For example, you can add metadata to provide helpful descriptions of your +image, or offer suggestions on other images that may also be needed. -NOTE: Currently, you can only define metadata for Docker images by specifying extra environment variables. There is upstream work currently being done to fix this situation which will be implemented in a later release. +NOTE: Currently, you can only define metadata for Docker images by specifying +extra environment variables. There is upstream work currently being done to fix +this situation which will be implemented in a later release. -This topic only defines the metadata needed by the current set of use cases. Additional metadata or use cases may be added in the future. +This topic only defines the metadata needed by the current set of use cases. 
+Additional metadata or use cases may be added in the future. == Defining Image Metadata -You can use the `ENV` instruction in a [filename]#Dockerfile# to define image metadata. This instruction is used to define environment variables that are available inside the container, allowing the application running in the container to consume them. See the https://docs.docker.com/reference/builder/#env[Docker documentation] for more information on the `ENV` instruction. +You can use the `ENV` instruction in a *_Dockerfile_* to define image +metadata. This instruction is used to define environment variables that are +available inside the container, allowing the application running in the +container to consume them. See the +https://docs.docker.com/reference/builder/#env[Docker documentation] for more +information on the `ENV` instruction. The environment variables are also available in the Docker image JSON representation, where the platform can see them and use them. @@ -28,49 +39,84 @@ representation, where the platform can see them and use them. |Variable |Description -|[envar]#IMAGE_TAGS# -|Specifies a list of tags for categorizing Docker images into broad areas of functionality. Tags help the UI and generation tools suggest relevant images during the application creation process. Example format: - -**** -[envar]#IMAGE_TAGS#=[replaceable]#database,mysql# -**** - -|[envar]#IMAGE_WANTS# -|Specifies a list of tags that the UI and generation tools might suggest to provide if you do not have the Docker images with given tags already. For example, if the image wants `mysql` and `redis`, and you do not have an image with a `redis` tag, then the UI might suggest you to add this image into your application. Example format: - -**** -[envar]#IMAGE_WANTS#=[replaceable]#mysql,redis# -**** - -|[envar]#IMAGE_DESCRIPTION# -|Gives image consumers a more detailed description about the service or functionality provided by the image. 
The UI can use this description together with the image name to give users more information about the image. Example format: - -**** -[envar]#IMAGE_DESCRIPTION#="[replaceable]#MySQL 5.5 database#" -**** - -|[envar]#IMAGE_EXPOSE_SERVICES# -|Contains a list of ports that match with the `EXPOSE` instructions in the [filename]#Dockerfile# and provides more descriptive information about what actual service the given port provides. - -The format is `PORT[/PROTO]:SERVICE_NAME` where `[PROTO]` is optional and defaults to `tcp` if unspecified. Example format: - -**** -[envar]#IMAGE_EXPOSE_SERVICES#="[replaceable]#3128/tcp:mysql,8080:http#" -**** - -|[envar]#IMAGE_NON_SCALABLE# -|Used to suggest to consumers through the UI that the image does not support scaling. Being non-scalable basically means that the value of `replicas` should initially not be set higher than `1`. Example format: - -**** -[envar]#IMAGE_NON_SCALABLE#=[replaceable]#true# -**** - -|[envar]#IMAGE_MIN_CPU# and [envar]#IMAGE_MIN_MEMORY# -|Specify the amount resources the image requires to work properly. The UI might warn the user that deploying this image may exceed the user quota. The values must be compatible with https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md#resource-quantities[Kubernetes quantity] values for CPU and memory. Example format: - -**** -[envar]#IMAGE_MIN_CPU#=[replaceable]#8Gi# - -[envar]#IMAGE_MIN_MEMORY#=[replaceable]#4# -**** +|`*IMAGE_TAGS*` +|Specifies a list of tags for categorizing Docker images into broad areas of +functionality. Tags help the UI and generation tools suggest relevant images +during the application creation process. Example format: + +==== + +---- +IMAGE_TAGS=database,mysql +---- +==== + +|`*IMAGE_WANTS*` +|Specifies a list of tags that the UI and generation tools might suggest to +provide if you do not have the Docker images with given tags already. 
For +example, if the image wants `mysql` and `redis`, and you do not have an image +with a `redis` tag, then the UI might suggest that you add this image to your +application. Example format: + +==== + +---- +IMAGE_WANTS=mysql,redis +---- +==== + +|`*IMAGE_DESCRIPTION*` +|Gives image consumers a more detailed description about the service or +functionality provided by the image. The UI can use this description together +with the image name to give users more information about the image. Example +format: + +==== + +---- +IMAGE_DESCRIPTION=MySQL 5.5 database +---- +==== + +|`*IMAGE_EXPOSE_SERVICES*` +|Contains a list of ports that match the `EXPOSE` instructions in the +*_Dockerfile_* and provides more descriptive information about what +actual service the given port provides. + +The format is `PORT[/PROTO]:SERVICE_NAME` where `[PROTO]` is optional and +defaults to `tcp` if unspecified. Example format: + +==== + +---- +IMAGE_EXPOSE_SERVICES="3128/tcp:mysql,8080:http" +---- +==== + +|`*IMAGE_NON_SCALABLE*` +|Used to suggest to consumers through the UI that the image does not support +scaling. Being non-scalable means that the value of `replicas` should +initially not be set higher than `1`. Example format: + +==== + +---- +IMAGE_NON_SCALABLE=true +---- +==== + +|`*IMAGE_MIN_CPU*` and `*IMAGE_MIN_MEMORY*` +|Specify the amount of resources the image requires to work properly. The UI might +warn the user that deploying this image may exceed the user quota. The values +must be compatible with +https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/resources.md#resource-quantities[Kubernetes +quantity] values for CPU and memory.
Example format: + +==== + +---- +IMAGE_MIN_CPU=4 +IMAGE_MIN_MEMORY=8Gi +---- +==== |=== diff --git a/creating_images/sti.adoc b/creating_images/sti.adoc index 61107a022349..6ef15937809c 100644 --- a/creating_images/sti.adoc +++ b/creating_images/sti.adoc @@ -10,32 +10,43 @@ toc::[] == Overview -link:../architecture/openshift_model.html#source-to-image[Source-to-Image (STI)] is a framework that makes it easy -to write images that take application source code as an input and produce a new image that runs the assembled application as output. +link:../architecture/core_objects/builds.html#sti-build[Source-to-Image (STI)] +is a framework that makes it easy to write images that take application source +code as an input and produce a new image that runs the assembled application as +output. -The main advantage of using STI for building reproducible Docker images is the ease of use for developers. As a builder -image author, you must be aware of the two basic requirements for the best possible STI performance: the required image contents and STI scripts. +The main advantage of using STI for building reproducible Docker images is the +ease of use for developers. As a builder image author, you must be aware of the +two basic requirements for the best possible STI performance: the required image +contents and STI scripts. == Required Image Contents -The build process consists of three fundamental elements, which are combined into a final Docker image: +The build process consists of the following three fundamental elements, which +are combined into a final Docker image: - sources - STI scripts - builder image -During the build process, STI must place sources and scripts inside the builder image. To do so, STI creates a tar file that -contains the sources and scripts, then streams that file into the builder image.
Before executing the `sti assemble` -script, STI untars that file and places its contents into the location specified with the `--location` flag or the -`STI_LOCATION` environment variable from the builder image, with the default location being the [filename]#/tmp# directory. - -For this tar process to happen, your image must supply the tar archiving utility (the `tar` command available in [filename]#$PATH#) -and the command line interpreter (the `/bin/sh` command); this allows your image to use the fastest possible build path. If the -`tar` or `/bin/sh` command is not available, the `sti build` script is forced to automatically perform an additional Docker build -to put both the sources and the scripts inside the image, and only then run the usual `sti build` procedure. +During the build process, STI must place sources and scripts inside the builder +image. To do so, STI creates a *_tar_* file that contains the sources and +scripts, then streams that file into the builder image. Before executing the +*_assemble_* script, STI untars that file and places its contents into the +location specified with the `--location` flag or the `*STI_LOCATION*` +environment variable from the builder image, with the default location being the +*_/tmp_* directory. + +For this *tar* process to happen, your image must supply the *tar* archiving +utility (the `tar` command available in `*$PATH*`) and the command line +interpreter (the `/bin/sh` command); this allows your image to use the fastest +possible build path. If the `tar` or `/bin/sh` command is not available, the +`sti build` script is forced to automatically perform an additional Docker build +to put both the sources and the scripts inside the image, and only then run the +usual `sti build` procedure. 
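As a quick sanity check of the requirement above, you can verify that the two commands STI depends on are present. This sketch checks the local system; for a builder image you would run the same loop inside the container:

```shell
# Verify that the tar archiving utility and the command line interpreter
# needed for the fast STI build path are both available on PATH.
for cmd in tar sh; do
  command -v "$cmd" >/dev/null || { echo "missing: $cmd"; exit 1; }
done
echo "fast build path prerequisites present"
```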
See the following diagram for the basic STI build workflow: -.Build workflow +.Build Workflow image::sti-flow.png[STI workflow] //// @@ -43,7 +54,8 @@ image::sti-flow.png[STI workflow] //// == STI Scripts -You can write STI scripts in any programming language as long as the scripts are executable inside the builder image. +You can write STI scripts in any programming language as long as the scripts are +executable inside the builder image. .STI Scripts [cols="3a,8a",options="header"] |=== @@ -51,54 +63,63 @@ You can write STI scripts in any programming language as long as the scripts are |Script |Description -|[filename]#assemble# +|*_assemble_* + (required) -|The [filename]#assemble# script builds the application artifacts from a source and places them into appropriate -directories inside the image. The workflow for this script is: +|The *_assemble_* script builds the application artifacts from a source +and places them into appropriate directories inside the image. The workflow for +this script is: -. Restore build artifacts. If you want to support incremental builds, make sure to define link:#save-artifacts[*_save-artifacts_*] as well. . Place the application source in the desired location. . Build the application artifacts. . Install the artifacts into locations appropriate for them to run. -|[filename]#run# +|*_run_* + (required) -|The [filename]#run# script executes your application. +|The *_run_* script executes your application. -|[filename]#save-artifacts# +|*_save-artifacts_* + (optional) -|The [filename]#save-artifacts# script gathers all dependencies that can speed up the build processes that follow. For example: +|The *_save-artifacts_* script gathers all dependencies that can speed up the +build processes that follow. For example: -- For Ruby, `gems` is installed by Bundler. -- For Java, `.m2` contents are installed.
+- For Ruby, *gems* are installed by Bundler. +- For Java, *.m2* contents are installed. -These dependencies are gathered into a tar file and streamed to the standard output. +These dependencies are gathered into a tar file and streamed to the standard +output. -|[filename]#usage# +|*_usage_* + (optional) -|The [filename]#usage# script allows you to inform the user how to properly use your image. +|The *_usage_* script allows you to inform the user how to properly use your +image. -|[filename]#test/run# +|*_test/run_* + (optional) -|The [filename]#test/run# script allows you to create a simple process to check if the image is working correctly. The proposed flow of that process is: +|The *_test/run_* script allows you to create a simple process to check if the +image is working correctly. The proposed flow of that process is: . Build the image. -. Run the image to verify the [filename]#usage# script. -. Run `sti build` to verify the [filename]#assemble# script. -. Run `sti build` again to verify the [filename]#save-artifacts# script and the [filename]#usage# script's restore artifacts functionality. (optional) +. Run the image to verify the *_usage_* script. +. Run `sti build` to verify the *_assemble_* script. +. Run `sti build` again to verify the *_save-artifacts_* script and the *_usage_* script's restore artifacts functionality. (optional) . Run the image to verify the test application is working. See the link:sti_testing.html[Testing STI Images] topic for more information. -NOTE: The suggested location to put the test application built by your [filename]#test/run# script is the [filename]#test/test-app# directory in your image repository. See the -https://github.com/openshift/source-to-image/blob/master/docs/cli.md#sti-create[STI documentation] for more information. +NOTE: The suggested location to put the test application built by your +*_test/run_* script is the *_test/test-app_* directory in your image repository.
+See the +https://github.com/openshift/source-to-image/blob/master/docs/cli.md#sti-create[STI +documentation] for more information. |=== *Example STI Scripts* -NOTE: The following examples are written in Bash and it is assumed all tar contents are unpacked into the [filename]#/tmp/sti# directory. +NOTE: The following examples are written in Bash and it is assumed all tar +contents are unpacked into the *_/tmp/sti_* directory. -.[filename]#assemble# script: +.*_assemble_* script: ==== ---- @@ -122,7 +143,7 @@ popd ---- ==== -.[filename]#run# script: +.*_run_* script: ==== ---- @@ -133,7 +154,7 @@ popd ---- ==== -.[filename]#save-artifacts# script: +.*_save-artifacts_* script: ==== ---- @@ -149,7 +170,7 @@ popd ---- ==== -.[filename]#usage# script: +.*_usage_* script: ==== ---- @@ -165,24 +186,29 @@ EOF [[using-images-with-onbuild-instructions]] == Using Images with `ONBUILD` Instructions -The `ONBUILD` instructions can be found in many official Docker images. For example: +The `ONBUILD` instructions can be found in many official Docker images. For +example: - https://registry.hub.docker.com/u/library/ruby[Ruby] - https://registry.hub.docker.com/u/library/node[Node.js] - https://registry.hub.docker.com/u/library/python[Python] -See the https://docs.docker.com/reference/builder/#onbuild[Docker documentation] for more information on `ONBUILD`. +See the https://docs.docker.com/reference/builder/#onbuild[Docker documentation] +for more information on `ONBUILD`. -STI has a different strategy when a Docker image with `ONBUILD` instructions is used as a builder image for the application -source code. During the STI build, all `ONBUILD` instructions are executed in the order they were defined in the builder image -Dockerfile. The STI scripts are not required for this strategy, but they can be used as supplementary scripts to existing -`ONBUILD` instructions. 
+STI has a different strategy when a Docker image with `ONBUILD` instructions is +used as a builder image for the application source code. During the STI build, +all `ONBUILD` instructions are executed in the order they were defined in the +builder image Dockerfile. The STI scripts are not required for this strategy, +but they can be used as supplementary scripts to existing `ONBUILD` +instructions. -Many official Docker images that use `ONBUILD` do not declare the image `CMD` or `ENTRYPOINT`, and for that, STI must know -how to run your application. There are two methods for defining the `ENTRYPOINT`: +Many official Docker images that use `ONBUILD` do not declare the image `CMD` or +`ENTRYPOINT`, so STI must know how to run your application. There are +two methods for defining the `ENTRYPOINT`: -- Include the [filename]#run# script in your application root folder. STI then recognizes it and sets it as the application image `ENTRYPOINT`. +- Include the *_run_* script in your application root folder. STI then recognizes it and sets it as the application image `ENTRYPOINT`. -- Use the STI scripts. If you provide the URL from where the STI can fetch the scripts, the STI [filename]#run# script is then -set as an image `ENTRYPOINT`. If the STI scripts location also includes the [filename]#assemble# script, the script is then +- Use the STI scripts. If you provide the URL from which STI can fetch the scripts, the STI *_run_* script is then +set as the image `ENTRYPOINT`. If the STI scripts location also includes the *_assemble_* script, the script is then executed as the last instruction of the Docker build.
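As a sketch of the first method, a minimal executable *_run_* script placed at the application root might look like the following; the start command inside the script is a hypothetical placeholder for your application's real server process:

```shell
# Create a minimal application-root "run" script; with an ONBUILD builder
# image, STI detects this file and sets it as the image ENTRYPOINT.
cat > run <<'EOF'
#!/bin/sh
# Placeholder start command; a real application would exec its server here.
exec echo "application started"
EOF
chmod +x run
./run
```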
diff --git a/creating_images/sti_testing.adoc b/creating_images/sti_testing.adoc index 4d3cec193216..6a0f91910f6b 100644 --- a/creating_images/sti_testing.adoc +++ b/creating_images/sti_testing.adoc @@ -10,30 +10,55 @@ toc::[] == Overview -As an STI builder image author, you can test your STI image locally and use the OpenShift build system for automated testing and continuous integration. +As an STI builder image author, you can test your STI image locally and use the +OpenShift build system for automated testing and continuous integration. -NOTE: See the link:sti.html[STI Requirements] topic to learn more about the STI architecture before proceeding. +NOTE: See the link:sti.html[STI Requirements] topic to learn more about the STI +architecture before proceeding. -As described in the link:sti.html[STI Requirements] topic, STI requires the `assemble` and `run` scripts to be present in order to successfully execute the STI build. Providing the `save-artifacts` script reuses the build artifacts, and providing the `usage` script ensures that usage information is printed to console when someone runs the Docker image outside of the STI. +As described in the link:sti.html[STI Requirements] topic, STI requires the +*_assemble_* and *_run_* scripts to be present in order to successfully execute +the STI build. Providing the *_save-artifacts_* script reuses the build +artifacts, and providing the *_usage_* script ensures that usage information is +printed to the console when someone runs the Docker image outside of STI. -The goal of testing an STI image is to make sure that all of these described commands work properly, even if the base Docker image has changed or the tooling used by the commands was updated. +The goal of testing an STI image is to make sure that all of these described +commands work properly, even if the base Docker image has changed or the tooling +used by the commands was updated.
== Testing Requirements -The standard location for the `test` script is [filename]#test/run#. This script is invoked by the OpenShift STI image builder and it could be a simple Bash script or a static Go binary. +The standard location for the *_test_* script is *_test/run_*. This script is +invoked by the OpenShift STI image builder and it could be a simple Bash script +or a static Go binary. -Because the `test/run` script performs the STI build, you must have the STI binary available on your system. See the STI https://github.com/openshift/source-to-image/blob/master/README.md#installation[README] file and follow the installation instructions if required. +Because the *_test/run_* script performs the STI build, you must have the STI +binary available on your system. See the STI +https://github.com/openshift/source-to-image/blob/master/README.md#installation[README] +file and follow the installation instructions if required. -Because STI puts together the application source code and Docker image, to test it you need a sample application source you can use during the test to verify that the application source is successfully converted into a Docker image. The sample application should be simple, but it should also exercise the `assemble` command. +Because STI puts together the application source code and Docker image, to test +it you need a sample application source you can use during the test to verify +that the application source is successfully converted into a Docker image. The +sample application should be simple, but it should also exercise the `assemble` +command. == Using `sti create` -The STI tooling comes with powerful generation tools to speed up the process of creating a new STI image. The https://github.com/openshift/source-to-image/blob/master/docs/cli.md#sti-create[`sti create` command] produces all necessary STI scripts and testing tools along with the [filename]#Makefile#. 
The generated [filename]#test/run# script must be adjusted to be useful, but it provides a good starting point to begin developing. +The STI tooling comes with powerful generation tools to speed up the process of +creating a new STI image. The +https://github.com/openshift/source-to-image/blob/master/docs/cli.md#sti-create[`sti +create` command] produces all necessary STI scripts and testing tools along with +the *_Makefile_*. The generated *_test/run_* script must be adjusted to be +useful, but it provides a good starting point to begin developing. -NOTE: The [filename]#test/run# script produced by the `sti create` command requires that the sample application sources are inside the [filename]#test/test-app# directory. +NOTE: The *_test/run_* script produced by the `sti create` command requires that the sample application sources are inside the *_test/test-app_* directory. == Testing Locally -The easiest way to run the STI image tests locally is to use the generated [filename]#Makefile#. If you did not use the `sti create` command, you can copy the following [filename]#Makefile# template and replace the `IMAGE_NAME` parameter with your image name. +The easiest way to run the STI image tests locally is to use the generated +*_Makefile_*. If you did not use the `sti create` command, you can copy the +following *_Makefile_* template and replace the `*IMAGE_NAME*` parameter with +your image name. -.Sample Makefile +.Sample *_Makefile_* ==== ---- @@ -50,43 +75,61 @@ test: ==== == Basic Testing Workflow -The `test` script assumes you have already built the image that you want to test. If required, first build the STI image using: +The *_test_* script assumes you have already built the image that you want to +test. If required, first build the STI image using: ----- -docker build -t BUILDER_IMAGE_NAME ----- +**** +`$ docker build -t __` +**** The following steps describe the default workflow to test STI image builders: -. Verify the `usage` script is working: +. 
Verify the *_usage_* script is working: + +==== + ---- -docker run BUILDER_IMAGE_NAME +$ docker run BUILDER_IMAGE_NAME ---- -+ +==== + . Build the image: + +==== + +[options="nowrap"] ---- -sti build file:///path-to-sample-app BUILDER_IMAGE_NAME OUTPUT_APPLICATION_IMAGE_NAME +$ sti build file:///path-to-sample-app BUILDER_IMAGE_NAME OUTPUT_APPLICATION_IMAGE_NAME ---- -+ -. If you support `save-artifacts`, execute step 2 again to verify that restoring artifacts works properly. -+ +==== + +. If you support *_save-artifacts_*, execute step 2 again to verify that restoring artifacts works properly. + . Run the container: + +==== + ---- -docker run OUTPUT_APPLICATION_IMAGE_NAME +$ docker run OUTPUT_APPLICATION_IMAGE_NAME ---- +==== + . Verify the container is running and the application is responding. -Executing these steps is generally enough to tell if the STI scripts are operating properly. +Executing these steps is generally enough to tell if the STI scripts are +operating properly. == Using OpenShift Build for Automated Testing -Another way you can execute the STI image tests is to use the OpenShift platform itself as a continuous integration system. The OpenShift platform is capable of building Docker images and is highly customizable. +Another way you can execute the STI image tests is to use the OpenShift platform +itself as a continuous integration system. The OpenShift platform is capable of +building Docker images and is highly customizable. -To set up a STI image builder CI, define a special `CustomBuild` and use the [sysitem]#openshift/sti-image-builder# image. This image executes all the steps mentioned in the link:#basic-testing-workflow[Basic Testing Workflow] section and creates a new STI builder image. +To set up a STI image builder CI, define a special `*CustomBuild*` and use the +*openshift/sti-image-builder* image. 
This image executes all the steps mentioned +in the link:#basic-testing-workflow[Basic Testing Workflow] section and creates +a new STI builder image. -.Sample `CustomBuild` +.Sample `*CustomBuild*` ==== ---- @@ -134,12 +177,20 @@ To set up a STI image builder CI, define a special `CustomBuild` and use the [sy ---- ==== -You can use the `osc create` command to create this `BuildConfig`. After the `BuildConfig` is created, you can start the build using the following command: +You can use the `osc create` command to create this `*BuildConfig*`. After the +`*BuildConfig*` is created, you can start the build using the following command: + +==== ---- -osc start-build ruby-20-centos7-build +$ osc start-build ruby-20-centos7-build ---- +==== -If your OpenShift instance is hosted on a public IP address, then the build is triggered each time you push into your STI builder image GitHub repository. +If your OpenShift instance is hosted on a public IP address, then the build is +triggered each time you push into your STI builder image GitHub repository. -You can also use the `CustomBuild` to trigger a rebuild for your applications based on the STI image you updated. In that case, you must specify the `Output` field in the `parameters` section and define to which Docker registry the image should be pushed after a successful build. +You can also use the `*CustomBuild*` to trigger a rebuild for your applications +based on the STI image you updated. In that case, you must specify the `Output` +field in the `parameters` section and define to which Docker registry the image +should be pushed after a successful build. diff --git a/dev_guide/builds.adoc b/dev_guide/builds.adoc index 863ee9e029a8..2487a36765d6 100644 --- a/dev_guide/builds.adoc +++ b/dev_guide/builds.adoc @@ -10,13 +10,12 @@ toc::[] == Overview +A link:../architecture/core_objects/builds.html[build] is a process of creating +runnable images to be used on OpenShift. 
There are three types of builds: -A link:../architecture/builds.html[build] is a process of creating runnable -images to be used on OpenShift. There are three types of builds: - -* Docker build -* STI build -* custom build +- Docker build +- STI build +- custom build == Starting a Build You can manually invoke a build using the following command: @@ -51,13 +50,14 @@ To allow access to build logs, use the following command: `$ osc build-logs __` **** -*STI Build logs* +*STI Build Logs* + +link:../architecture/core_objects/builds.html#sti-build[STI builds] by default +show the full output of the *_assemble_* script and any errors that occur along +the way. To enable more verbose output, you can pass the `*BUILD_LOGLEVEL*` +environment variable as part of the `*stiStrategy*` in the `*BuildConfig*`: -STI Build by default shows full output of the `assemble` script and all the errors -that happen in the mean time. If you're interested in a more verbose output there are -two options to do that: -. Increase verbosity of the entire OpenShift instance by passing `--loglevel` to `openshift start` command, STI Builder inherits the value of that flag. -. Pass `BUILD_LOGLEVEL` environment variable as part of the `stiStrategy` in BuildConfig: +==== ---- { @@ -66,24 +66,36 @@ two options to do that: "env": [ { "Name": "BUILD_LOGLEVEL", - "Value": "2" + "Value": "2" <1> } ] } } ---- -Available loglevels for STI are as follows: -- `Level 0` - produces output from containers running `assemble` script and all encountered errors (the default) -- `Level 1` - produces basic information about the executed process -- `Level 2` - produces very detailed information about the executed process -- `Level 3` - produces very detailed information about the executed process, alongside with listing tar contents +<1> Adjust this value to the desired log level.
+==== + +NOTE: A platform administrator can increase verbosity for the entire OpenShift +instance by passing the `--loglevel` flag to the `openshift start` command. The +STI builder inherits the value of that flag, which increases verbosity for all +STI build logs. + +Available log levels for STI are as follows: + +[horizontal] +Level 0:: Produces output from containers running the *_assemble_* script and +all encountered errors. (Default) +Level 1:: Produces basic information about the executed process +Level 2:: Produces very detailed information about the executed process +Level 3:: Produces very detailed information about the executed process, along +with listing *tar* contents. == Source Code -The source code location is one of the required parameters for the BuildConfig. -The build uses this location and fetches the source code that is later built. -The source code location definition is part of the *`parameters`* section in the -BuildConfig: +The source code location is one of the required parameters for the +`*BuildConfig*`. The build uses this location and fetches the source code that +is later built. The source code location definition is part of the +`*parameters*` section in the `*BuildConfig*`: ==== @@ -99,12 +111,12 @@ BuildConfig: } ---- -<1> The `type` field describes what SCM is used to fetch your source code. -<2> In this example, the `git` field contains the URI to the remote Git -repository where your source code lives. It might optionally specify the `ref` -field if you want to check out a specific Git reference. A valid `ref` can be a -SHA1 tag or a branch name. -<3> The `contextDir` field allows you to override the default location inside +<1> The `*type*` field describes what SCM is used to fetch your source code. +<2> In this example, the `*git*` field contains the URI to the remote Git +repository where your source code lives. It might optionally specify the `*ref*` +field if you want to check out a specific Git reference. 
A valid `*ref*` can be +a SHA1 tag or a branch name. +<3> The `*contextDir*` field allows you to override the default location inside the source code repository, where the build looks for the application source code. If your application exists inside a sub-directory, you can override the default location (the root folder) using this field. @@ -113,11 +125,12 @@ default location (the root folder) using this field. [[using-the-sti-environment-file]] == STI Environment File -link:../image_writers_guide/sti.html[STI] enables you to set environment values -in your application by specifying them in a *_.sti/environment_* file in the -source repository. The environment variables are then present during the build -process and in the final docker image. The complete list of supported -environment variables are available in the documentation for each image. +link:../architecture/core_objects/builds.html#sti-build[STI] enables you to set +environment values in your application by specifying them in a +*_.sti/environment_* file in the source repository. The environment variables +are then present during the build process and in the final Docker image. The +complete list of supported environment variables is available in the +documentation for each image. If you provide a *_.sti/environment_* file in your source repository, STI reads this file during the build. This allows customization of the build behavior as @@ -134,12 +147,9 @@ the running application itself. For example, you can add application to be started in `development` mode instead of `production`. == Build Triggers -When defining a BuildConfig, you can define triggers to control the -circumstances in which a build should be run for the BuildConfig. There are two -types of triggers available: - -* Webhook -* Image change +When defining a `*BuildConfig*`, you can define webhook triggers or image change +triggers to control the circumstances in which a build should be run for the +`*BuildConfig*`.
=== Webhook Triggers Webhook triggers allow you to trigger a new build by sending a request to the @@ -153,7 +163,7 @@ call made by GitHub when a repository is updated. When defining the trigger, you can specify a *secret* as part of the URL you supply to GitHub when configuring the webhook. The *secret* ensures that only you and your repository can trigger the build. The following example is a trigger definition -JSON within the BuildConfig: +JSON within the `*BuildConfig*`: ==== @@ -179,7 +189,7 @@ The payload URL is returned as the GitHub Webhook URL by the `describe` command Generic webhooks can be invoked from any system capable of making a web request. As with a GitHub webhook, you specify a *secret* when defining the trigger, and the caller must provide this *secret* to trigger the build. The -following is an example trigger definition JSON within the BuildConfig: +following is an example trigger definition JSON within the `*BuildConfig*`: ==== @@ -202,6 +212,8 @@ webhook endpoint for your build: The endpoint can accept an optional payload with the following format: +==== + ---- { type: 'git', @@ -221,18 +233,19 @@ The endpoint can accept an optional payload with the following format: } } ---- +==== [#describe-buildconfig] *Displaying a BuildConfig's Webhook URLs* -Use the following command to display the Webhook URLs associated with a build +Use the following command to display the webhook URLs associated with a build configuration: **** `osc describe buildConfig __` **** -If the above command does not display any Webhook URLs, then no Webhook trigger +If the above command does not display any webhook URLs, then no webhook trigger is defined for that build configuration. === Image Change Triggers @@ -244,7 +257,7 @@ latest RHEL base image. Configuring an image change trigger requires the following actions: -1. Define an ImageRepository that points to the upstream image you want to +1. 
Define an `*ImageRepository*` that points to the upstream image you want to trigger: + ==== @@ -265,8 +278,7 @@ located at `__/__/ruby-20-centos7`. The `__` is defined as a service with the name `docker-registry` running in OpenShift. -2. Define a build with a strategy that consumes some upstream image; for -example: +2. Define a build with a strategy that consumes some upstream image: + ==== @@ -306,11 +318,13 @@ This defines an image change trigger which monitors the `ruby-20-centos7` ImageRepository defined earlier. Specifically, it monitors for changes to the `latest` tag in that repository. When a change occurs, a new build is triggered and is supplied with an immutable Docker tag that points to the new image that -was just created. Wherever the BuildConfig previously referenced +was just created. + +Wherever the `*BuildConfig*` previously referenced `172.30.17.3:5001/mynamespace/ruby-20-centos7` (as defined by the image change trigger's image field), the value is replaced with the new immutable image tag; for example, the newly-created build's definition: -+ + ==== ---- @@ -322,24 +336,24 @@ for example, the newly-created build's definition: } ---- ==== -+ + This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run anytime with exactly the same inputs. -For link:../openshift_sti_images/overview.html[STI type builds], the field that -is matched and replaced is the `stiStrategy.image` field. For Docker builds, the -field is `dockerStrategy.baseImage`. For Custom builds, the -`customStrategy.image` field is updated. +For link:../architecture/core_objects/builds.html#sti-build[STI type builds], +the field that is matched and replaced is the `*stiStrategy.image*` field. For +Docker builds, the field is `*dockerStrategy.baseImage*`. For custom builds, the +`*customStrategy.image*` field is updated. 
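The field-replacement rule described above (one strategy field per build type) can be summarized with a small sketch. This is illustrative only, not OpenShift code; the helper name is hypothetical, but the dotted field paths mirror the ones listed above:

```python
# Map each build type to the strategy field that an image change trigger
# rewrites with the new immutable image reference, per the rule above.
# Illustrative sketch only; not actual OpenShift code.
IMAGE_FIELD_BY_BUILD_TYPE = {
    "STI": "stiStrategy.image",
    "Docker": "dockerStrategy.baseImage",
    "Custom": "customStrategy.image",
}

def field_to_replace(build_type):
    """Return the dotted path of the strategy field to update."""
    return IMAGE_FIELD_BY_BUILD_TYPE[build_type]

print(field_to_replace("Docker"))  # -> dockerStrategy.baseImage
```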
-In addition, for custom builds, the `OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE` +In addition, for custom builds, the `*OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE*` environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist and the value matches the image field of the image change trigger, then it is updated with the immutable image reference. -If an ImageChange trigger is defined on a BuildConfig and a build is -triggered (due to an image change, webhook trigger, or manual request), -then the build that is created uses the *immutableid* resolved from the -ImageRepository pointed to by the ImageChange trigger. This ensures that builds -are performed using consistent image tags for ease of reproduction. +If an `*imageChange*` trigger is defined on a `*BuildConfig*` and a build is +triggered (due to an image change, webhook trigger, or manual request), then the +build that is created uses the `*immutableid*` resolved from the +`*ImageRepository*` pointed to by the `*imageChange*` trigger. This ensures that +builds are performed using consistent image tags for ease of reproduction. diff --git a/dev_guide/deployments.adoc b/dev_guide/deployments.adoc index 789b82a57405..c77cf2d8a2f4 100644 --- a/dev_guide/deployments.adoc +++ b/dev_guide/deployments.adoc @@ -5,29 +5,37 @@ :icons: :experimental: :toc: macro -:toc-title: +:toc-title: toc::[] == Overview +A deployment in OpenShift is an update to a single replication controller's +pod template based on triggered events. The `*deployment*` subsystem provides: -In OpenShift, a deployment is an update to a single replication controller's pod template based on triggered events. The deployment subsystem provides: +- A link:#defining-a-deploymentConfig[declarative definition] of a desired deployment configuration which drives automated deployments by the system. +- link:#triggers[Triggers] which drive new deployments in response to events. 
+- link:#rollbacks[Rollbacks] to a previous deployment. +- link:#strategies[Strategies] for deployment rollout behavior which are user-customizable. +- Audit history of deployed pod template configurations. -* link:#defining-a-deploymentConfig[Declarative definition] of a desired deployment configuration which drives automated deployments by the system -* link:#triggers[Triggers] which drive new deployments in response to events -* link:#rollbacks[Rollback] to a previous deployment -* link:#strategies[Strategies] for deployment rollout behavior which are user-customizable -* Audit history of deployed pod template configurations +A `*deploymentConfig*` describes a single link:templates.html[template] and a +set of link:#triggers[triggers] for when a new deployment should be created. +A deployment is simply a specially annotated `*replicationController*`. A +link:#strategies[strategy] is responsible for making a deployment live in the +cluster. -==== Concepts - -An OpenShift `deploymentConfig` describes a single `template` and a set of `triggers` for when a new `deployment` should be created. A `deployment` is simply a specially annotated `replicationController`. A `strategy` is responsible for making a `deployment` live in the cluster. - -Each time a new deployment is created, the `latestVersion` field of `deploymentConfig` is incremented, and a `deploymentCause` is added to the `deploymentConfig` describing the change that led to the latest deployment. +Each time a new deployment is created, the `*latestVersion*` field of +`*deploymentConfig*` is incremented. A `*deploymentCause*` is also added to the +`*deploymentConfig*` describing the change that led to the latest deployment. == Defining a deploymentConfig +A `*deploymentConfig*` is a REST object which can be used in a `POST` to the +API server to create a new instance. 
Consider the following simple, but +complete, configuration which should result in a new deployment every time a +Docker image tag changes: -A `deploymentConfig` in OpenShift is a REST object which can be POSTed to the API server to create a new instance. Consider a simple configuration which should result in a new `deployment` every time a Docker image tag changes. +==== [source,json] ---- @@ -85,17 +93,28 @@ A `deploymentConfig` in OpenShift is a REST object which can be POSTed to the AP } ---- -<1> This specification will create a new `deploymentConfig` named `frontend`. -<2> The Recreate `strategy` makes the `deployment` live by disabling any prior `deployment` and increasing the replica count of the new `deployment`. -<3> A single ImageChange `trigger` is defined, which causes a new `deployment` to be created each time the `openshift/origin-ruby-sample:latest` tag value changes. +<1> This specification will create a new `*deploymentConfig*` named +`*frontend*`. +<2> The Recreate `*strategy*` makes the deployment live by disabling any prior +`deployment` and increasing the replica count of the new `deployment`. +<3> A single `*ImageChange*` trigger is defined, which causes a new deployment +to be created each time the *openshift/origin-ruby-sample:latest* tag value +changes. +==== -## Strategies +== Strategies +A `*deploymentConfig*` has a `*strategy*` which is responsible for making new +deployments live in the cluster. Each application has different requirements for +availability (and other considerations) during deployments. OpenShift provides +out-of-the-box strategies to support a variety of deployment scenarios: -A `deploymentConfig` has a `strategy` which is responsible for making new deployments live in the cluster. Each application has different requirements for availability (and other considerations) during deployments. 
OpenShift provides out-of-the-box strategies to support a variety of deployment scenarios:

+*Recreate Strategy*
[[recreate-strategy]]
-===== Recreate strategy
+The `*Recreate*` `*strategy*` has basic rollout behavior, and supports
+link:#lifecycle-hooks[lifecycle hooks] for injecting code into the deployment
+process.

-The Recreate `strategy` has basic rollout behavior, and supports link:#lifecycle-hooks[lifecycle hooks] for injecting code into the deployment process.
+====

[source,json]
----
@@ -108,23 +127,30 @@ The Recreate `strategy` has basic rollout behavior, and supports link:#lifecycle
 }
----

-<1> `recreateParams` are *optional*.
-<2> `pre` and `post` are both link:#lifecycle-hooks[lifecycle hooks].
+<1> `*recreateParams*` are optional.
+<2> `*pre*` and `*post*` are both link:#lifecycle-hooks[lifecycle hooks].
+====
+
+The algorithm for the `*Recreate*` `*strategy*` is:

-The algorithm for this `strategy` is:
+. Execute any `*pre*` lifecycle hook.
+. Increase the replica count of the new deployment to the replica count
+defined on the deployment configuration.
+. Find and disable previous deployments by reducing their replica count to `0`.
+. Execute any `*post*` lifecycle hook.

-1. Execute any `pre` lifecycle hook
-2. Increase the replica count of the new `deployment` to the replica count defined on the deployment configuration
-3. Find and disable previous `deployments` (by reducing their replica count to 0)
-4. Execute any `post` lifecycle hook

+link:#lifecycle-hooks[Lifecycle hooks] are specified in the `*recreateParams*`
+for the strategy.

-link:#lifecycle-hooks[Lifecycle hooks] are specified in the `recreateParams` for the strategy.

+IMPORTANT: The `*Abort*` lifecycle hook failure policy is not supported for the
+`*post*` hook in this strategy; any `*post*` hook failure will be ignored.

-IMPORTANT: The `Abort` lifecycle hook failure policy is *not* supported for the `post` hook in this strategy; any `post` hook failure will be ignored.
+*Custom Strategy* [[custom-strategy]] -===== Custom strategy +The `*Custom*` `*strategy*` allows users of OpenShift to provide their own +deployment behavior. -The Custom `strategy` allows users of OpenShift to provide their own deployment behavior. +==== [source,json] ---- @@ -142,27 +168,38 @@ The Custom `strategy` allows users of OpenShift to provide their own deployment } } ---- +==== -With this specification, the `organization/strategy` Docker image will carry out the `strategy` behavior. The optional `command` array overrides any `CMD` directive specified in the image's Dockerfile. The optional `environment` variables provided will be added to the execution environment of the `strategy` process. +With this specification, the *organization/strategy* Docker image carries out +the `*strategy*` behavior. The optional `*command*` array overrides any `CMD` +directive specified in the image's *_Dockerfile_*. The optional `*environment*` +variables provided are added to the execution environment of the `*strategy*` +process. -Additionally, the following environment variables are provided by OpenShift to the `strategy` process: +Additionally, the following environment variables are provided by OpenShift to +the `*strategy*` process: [cols="4,8",options="header"] |=== |Environment Variable |Description -.^|`OPENSHIFT_DEPLOYMENT_NAME` -|The name of the new `deployment` (a `replicationController`) +.^|`*OPENSHIFT_DEPLOYMENT_NAME*` +|The name of the new deployment (a `*replicationController*`). -.^|`OPENSHIFT_DEPLOYMENT_NAMESPACE` -|The namespace of the new `deployment` +.^|`*OPENSHIFT_DEPLOYMENT_NAMESPACE*` +|The namespace of the new deployment. |=== -The replica count of the new `deployment` will be 0 initially. The responsibility of the `strategy` is to make the new `deployment` live using whatever logic best serves the needs of the user. +The replica count of the new deployment will be `0` initially. 
The
+responsibility of the `*strategy*` is to make the new deployment live using
+whatever logic best serves the needs of the user.

== Lifecycle Hooks

+Deployment strategies may support lifecycle hooks, which allow you to
+inject behavior into the deployment process at predefined points within the
+strategy. Consider this partially defined hook:

-Deployment strategies may support lifecycle hooks, which allow the user to inject behavior into the deployment process at predefined points within the strategy. Consider this partially defined hook.
+====

[source,json]
----
@@ -171,21 +208,34 @@ Deployment strategies may support lifecycle hooks, which allow the user to injec
   "execNewPod": {} <1>
 }
----
-<2> `execNewPod` is the type of this lifecycle hook, and is link:#pod-based-lifecycle-hook[documented separately].
+<1> `*execNewPod*` is the type of this lifecycle hook, and is
+link:#pod-based-lifecycle-hook[documented separately].
+====

-Every hook has a `failurePolicy` which defines the action the strategy should take when a hook failure is encountered. Possible values are:
+Every hook has a `*failurePolicy*` which defines the action the strategy should take when a hook failure is encountered. Possible values are:

-* `Abort` - the deployment should be considered a failure if the hook fails.
-* `Retry` - the hook execution should be retried until it succeeds.
-* `Ignore` - any hook failure should be ignored and deployment should proceeed.
+[horizontal]
+Abort:: The deployment should be considered a failure if the hook fails.
+Retry:: The hook execution should be retried until it succeeds.
+Ignore:: Any hook failure should be ignored and the deployment should proceed.

-WARNING: Some hook points for a strategy might support only a subset of `failurePolicy` values. For example, the `Recreate` strategy does not currently support the `Abort` policy for its "post" deployment lifecycle hook point.
Check the documentation for a given strategy to learn more about its support for lifecycle hooks.
+WARNING: Some hook points for a strategy might support only a subset of
+`*failurePolicy*` values. For example, the `*Recreate*` strategy does not
+currently support the `*Abort*` policy for its `*post*` deployment lifecycle
+hook point. Check the documentation for a given strategy to learn more about its
+support for lifecycle hooks.

-Hooks have a type specific field which describes how to execute the hook. Currently `execNewPod` is the only supported type.
+Hooks have a type-specific field which describes how to execute the hook.
+Currently `*execNewPod*` is the only supported type.

-===== Pod-based lifecycle hook
+*Pod-based Lifecycle Hook*
[[pod-based-lifecycle-hook]]

-The `execNewPod` hook type executes lifecycle hook code in a new pod derived from the pod template in a `deploymentConfig`. Consider this simplified example `deploymentConfig` which uses the link:#recreate-strategy[Recreate strategy].
+The `*execNewPod*` hook type executes lifecycle hook code in a new pod derived
+from the pod template in a `*deploymentConfig*`. Consider this simplified
+example `*deploymentConfig*` which uses the link:#recreate-strategy[`*Recreate*`
+`*strategy*`].
+
+====

[source,json]
----
@@ -231,20 +281,31 @@ The `execNewPod` hook type executes lifecycle hook code in a new pod derived fro
   }
 }
----
-<1> `containerName` corresponds to `podTemplate.containers[0].name`.
-<2> `command` overrides any `ENTRYPOINT` defined in the image used by `containerName`.
-<3> `env` is an *optional* set of environment variables for the hook container.
-
-
-In this example, the `pre` hook will be executed in a new pod using the `openshift/origin-ruby-sample` image from the `helloworld` container. The hook command will be `/usr/bin/command arg1 arg2`, and the hook pod will have `CUSTOM_VAR1=custom_value1` in its environment.
Because the `failurePolicy` is `Abort`, if the hook fails, the deployment will fail (as supported by the Recreate strategy). +<1> `*containerName*` corresponds to `*podTemplate.containers[0].name*`. +<2> `*command*` overrides any `ENTRYPOINT` defined in the image used by +`*containerName*`. +<3> `*env*` is an optional set of environment variables for the hook container. +==== + +In this example, the `*pre*` hook will be executed in a new pod using the +*openshift/origin-ruby-sample* image from the *helloworld* container. The hook +command will be `/usr/bin/command arg1 arg2`, and the hook pod will have +`*CUSTOM_VAR1=custom_value1*` in its environment. Because the `*failurePolicy*` +is `*Abort*`, if the hook fails, the deployment will fail (as supported by the +`*Recreate*` `*strategy*`). == Triggers -A `deploymentConfig` contains `triggers` which drive the creation of new deployments in response to events (both inside and outside OpenShift). The following trigger types are supported: +A `*deploymentConfig*` contains triggers which drive the creation of new +deployments in response to events, both inside and outside OpenShift. The +following trigger types are supported: + +*_ImageChange_ Triggers* [[image-change-triggers]] -===== Image change triggers +The `*ImageChange*` trigger results in a new deployment whenever the value +of a Docker `*imageRepository*` tag value changes. Consider an example trigger: -The ImageChange `trigger` will result in a new deployment whenever the value of a Docker `imageRepository` tag value changes. Consider an example trigger. +==== [source,json] ---- @@ -260,13 +321,21 @@ The ImageChange `trigger` will result in a new deployment whenever the value of } } ---- -<1> If the `automatic` option is set to `false`, the trigger is effectively disabled. +<1> If the `*automatic*` option is set to `*false*`, the trigger is effectively +disabled. 
+==== -In this example, when the `latest` tag value for the `imageRepository` named `openshift/origin-ruby-sample` changes, the containers specified in `containerNames` for the `deploymentConfig` will be updated with the new tag value, and a new `deployment` will be created. +In this example, when the `*latest*` tag value for the `*imageRepository*` named +*openshift/origin-ruby-sample* changes, the containers specified in +`*containerNames*` for the `*deploymentConfig*` will be updated with the new +tag value, and a new deployment will be created. -===== Config change triggers +*_ConfigChange_ Triggers* [[configchange-triggers]] -The ConfigChange `trigger` will result in a new deployment whenever changes are detected to the `template` of the `deploymentConfig`. Suppose the REST API is used to modify an environment variable in a container within the `template`. +The `*ConfigChange*` trigger results in a new deployment whenever changes are +detected to the `*template*` of the `*deploymentConfig*`: + +==== [source,json] ---- @@ -274,9 +343,14 @@ The ConfigChange `trigger` will result in a new deployment whenever changes are "type": "ConfigChange" } ---- +==== -This `trigger` will cause a new `deployment` to be created in response to the `template` modification. +For example, if the REST API is used to modify an environment variable in a +container within the `*template*`, this trigger will cause a new deployment to +be created in response to the `*template*` modification. == Rollbacks - -Rollbacks revert an application back to a previous deployment and can be performed using the REST API or the OpenShift CLI. See the link:cli.html#deployment-rollbacks[CLI documentation] for more details. \ No newline at end of file +Rollbacks revert an application back to a previous deployment and can be +performed using the REST API or the CLI. See the +link:../cli_reference/basic_cli_operations.html#deployment-operations[CLI +Reference] for more details. 
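The rollback semantics described above amount to re-activating the pod template recorded by an earlier deployment. A minimal sketch of that idea (the data model and helper are hypothetical, for illustration only; not OpenShift code or its API):

```python
# Each deployment records the pod template it deployed; rolling back means
# reverting to the template of an earlier version.
# Hypothetical model for illustration only; not actual OpenShift code.
history = {
    1: {"image": "origin-ruby-sample:v1"},
    2: {"image": "origin-ruby-sample:v2"},
}

def rollback_template(history, current_version):
    """Return the pod template recorded by the previous deployment."""
    previous = current_version - 1
    if previous not in history:
        raise ValueError("no previous deployment to roll back to")
    return history[previous]

print(rollback_template(history, 2))  # -> {'image': 'origin-ruby-sample:v1'}
```

The actual commands are documented in the CLI Reference linked above.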
diff --git a/dev_guide/executing_remote_commands.adoc b/dev_guide/executing_remote_commands.adoc index dc3cc4385a77..631c21b1c141 100644 --- a/dev_guide/executing_remote_commands.adoc +++ b/dev_guide/executing_remote_commands.adoc @@ -10,11 +10,12 @@ toc::[] == Overview -You can use the CLI to execute remote commands in a container. This allows you to run general Linux commands for routine operations in the container. +You can use the CLI to execute remote commands in a container. This allows you +to run general Linux commands for routine operations in the container. == Basic Usage Support for remote container command execution is built into -link:../using_openshift/cli.html#common-cli-operations[the CLI]: +link:../cli_reference/overview.html[the CLI]: **** `$ osc exec -p __ [-c __] __ _[ ... ]_` @@ -71,4 +72,6 @@ The client creates one stream each for stdin, stdout, and stderr. To distinguish The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request. -NOTE: Administrators can see the link:../architecture/remote_commands.html[Architecture] guide for more information. +NOTE: Administrators can see the +link:../architecture/additional_concepts/remote_commands.html[Architecture] +guide for more information. diff --git a/dev_guide/port_forwarding.adoc b/dev_guide/port_forwarding.adoc index e9cef82376a0..0a80d967033b 100644 --- a/dev_guide/port_forwarding.adoc +++ b/dev_guide/port_forwarding.adoc @@ -10,29 +10,30 @@ toc::[] == Overview -You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod. +You can use the CLI to forward one or more local ports to a pod. This allows you +to listen on a given or random port locally, and have data forwarded to and from +given ports in the pod. 
== Basic Usage Support for port forwarding is built into -link:../using_openshift/cli.html#common-cli-operations[the CLI]: +link:../cli_reference/overview.html[the CLI]: **** `$ osc port-forward -p __ _[:]_ _[[:] ...]_` **** -The CLI listens on each local port specified by the user, forwarding via the link:#protocol[protocol] described below. +The CLI listens on each local port specified by the user, forwarding via the +link:#protocol[protocol] described below. Ports may be specified using the following formats: -`5000`:: - The client listens on port 5000 locally and forwards to 5000 in the +[horizontal] +`5000`:: The client listens on port 5000 locally and forwards to 5000 in the pod. -`6000:5000`:: - The client listens on port 6000 locally and forwards to 5000 +`6000:5000`:: The client listens on port 6000 locally and forwards to 5000 in +the pod. +`:5000` or `0:5000`:: The client selects a free local port and forwards to 5000 in the pod. -`:5000` or `0:5000`:: - The client selects a free local port and -forwards to 5000 in the pod. For example, to listen on ports `5000` and `6000` locally and forward data to and from ports `5000` and `6000` in the pod, run: @@ -106,4 +107,4 @@ connection is delivered back to the same stream in the client. The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request. -NOTE: Administrators can see the link:../architecture/port_forwarding.html[Architecture] guide for more information. +NOTE: Administrators can see the link:../architecture/additional_concepts/port_forwarding.html[Architecture] guide for more information. 
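The three port-spec formats accepted by `osc port-forward` can be parsed as in this sketch, where a local port of `0` stands for "pick a free local port". This is illustrative only, not the CLI's actual implementation:

```python
def parse_port_spec(spec):
    """Parse a port-forward spec into (local, remote) port numbers.

    Mirrors the formats described above: '5000', '6000:5000', and
    ':5000'/'0:5000' (0 means the client selects a free local port).
    Illustrative sketch only; not the CLI's actual code.
    """
    if ":" in spec:
        local_part, remote_part = spec.split(":", 1)
        local = int(local_part) if local_part else 0
    else:
        local = int(spec)
        remote_part = spec
    return local, int(remote_part)

print(parse_port_spec("6000:5000"))  # -> (6000, 5000)
```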
diff --git a/getting_started/admin_get_started/configure.adoc b/getting_started/admin_get_started/configure.adoc index a889d6bdeffb..209cc23528d6 100644 --- a/getting_started/admin_get_started/configure.adoc +++ b/getting_started/admin_get_started/configure.adoc @@ -7,4 +7,9 @@ :toc: macro :toc-title: -toc::[] \ No newline at end of file +toc::[] + +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. diff --git a/getting_started/admin_get_started/install.adoc b/getting_started/admin_get_started/install.adoc index af51d6daa938..6e667017d634 100644 --- a/getting_started/admin_get_started/install.adoc +++ b/getting_started/admin_get_started/install.adoc @@ -7,4 +7,9 @@ :toc: macro :toc-title: -toc::[] \ No newline at end of file +toc::[] + +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. diff --git a/getting_started/admin_get_started/monitor.adoc b/getting_started/admin_get_started/monitor.adoc index 451e94066bb5..52c3b852e862 100644 --- a/getting_started/admin_get_started/monitor.adoc +++ b/getting_started/admin_get_started/monitor.adoc @@ -7,4 +7,9 @@ :toc: macro :toc-title: -toc::[] \ No newline at end of file +toc::[] + +If you'd like to contribute to OpenShift documentation, see our +https://github.com/openshift/openshift-docs[source repository] and +https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines] +to get started. 
diff --git a/getting_started/admin_get_started/overview.adoc b/getting_started/admin_get_started/overview.adoc
index 2bb4cea5f8da..06710d3762b8 100644
--- a/getting_started/admin_get_started/overview.adoc
+++ b/getting_started/admin_get_started/overview.adoc
@@ -7,4 +7,7 @@
 :toc: macro
 :toc-title:
-
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/getting_started/dev_get_started/installation.adoc b/getting_started/dev_get_started/installation.adoc
index 7f263abd5d65..6d44c86039ba 100644
--- a/getting_started/dev_get_started/installation.adoc
+++ b/getting_started/dev_get_started/installation.adoc
@@ -37,10 +37,10 @@ NOTE: The `/tmp/openshift` directory must be created the first time.
+
This command:
+
-* starts OpenShift listening on all interfaces (https://0.0.0.0:8443),
-* starts the Management Console listening on all interfaces (https://0.0.0.0:8443),
-* launches an [sysitem]#etcd# server to store persistent data, and
-* launches the Kubernetes system components.
+- starts OpenShift listening on all interfaces (*0.0.0.0:8443*),
+- starts the Management Console listening on all interfaces (*0.0.0.0:8443*),
+- launches an [systemitem]#etcd# server to store persistent data, and
+- launches the Kubernetes system components.
. After the container is started, you can open a console inside the container:
+
@@ -57,10 +57,11 @@ $ sudo docker exec -it openshift-origin bash
+
NOTE: When running as a user other than `root`, you would also need to make the private client key readable by that user. However, this is just for example purposes; in a production environment, developers would generate their own keys and not have access to the system keys.
-.
You can see more about the commands available in the link:../using_openshift/cli.html[CLI] (the `osc` command) with:
+. You can see more about the commands available in the
+link:../../cli_reference/basic_cli_operations.html[CLI] (the `osc` command) with:
+
----
-# osc help
+$ osc help
----
*What's Next?*
@@ -72,7 +73,9 @@ ifdef::openshift-origin[]
== Downloading the Binary
Red Hat periodically publishes binaries to GitHub, which you can download on the OpenShift Origin repository's https://github.com/openshift/origin/releases[Releases] page. These are Linux, Windows, or Mac OS X 64-bit binaries; note that the Mac and Windows versions are for the CLI only.
-The `tar` file for each platform contains a single binary, `openshift`, which is an all-in-one OpenShift installation. The file also contains the link:../using_openshift/cli.html[CLI] (the `osc` command).
+The `tar` file for each platform contains a single binary, `openshift`, which is
+an all-in-one OpenShift installation. The file also contains the
+link:../../cli_reference/basic_cli_operations.html[CLI] (the `osc` command).
*Installing and Running an All-in-One Server*
@@ -86,10 +89,10 @@ $ sudo ./openshift start
+
This command:
+
-* starts OpenShift listening on all interfaces (https://0.0.0.0:8443),
-* starts the Management Console listening on all interfaces (https://0.0.0.0:8443),
-* launches an [sysitem]#etcd# server to store persistent data, and
-* launches the Kubernetes system components.
+- starts OpenShift listening on all interfaces (*0.0.0.0:8443*),
+- starts the Management Console listening on all interfaces (*0.0.0.0:8443*),
+- launches an [systemitem]#etcd# server to store persistent data, and
+- launches the Kubernetes system components.
+
The server runs in the foreground until you terminate the process.
+ diff --git a/getting_started/dev_get_started/setup.adoc b/getting_started/dev_get_started/setup.adoc index b4f3baf4c2b7..5eef3644d1a4 100644 --- a/getting_started/dev_get_started/setup.adoc +++ b/getting_started/dev_get_started/setup.adoc @@ -15,7 +15,12 @@ OpenShift components can be installed across multiple hosts. The following secti endif::[] ifdef::openshift-enterprise[] -OpenShift components can be installed across multiple hosts. During the Beta 1 phase, we recommend installing a link:../architecture/kubernetes_infrastructure.html#master[master] on one host, and two link:../architecture/kubernetes_infrastructure.html#node[nodes] on two separate hosts. +OpenShift components can be installed across multiple hosts. During the Beta 1 +phase, we recommend installing a +link:../../architecture/infrastructure_components/kubernetes_infrastructure.html#master[master] +on one host, and two +link:../../architecture/infrastructure_components/kubernetes_infrastructure.html#node[nodes] +on two separate hosts. endif::[] == System Requirements @@ -60,7 +65,7 @@ OpenShift will increase the security constraints on containers in later beta rel == Environment Requirements *DNS* -A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift link:../architecture/routing.html[router]. During the Beta 1 phase, this guide ensures that the router ends up on the OpenShift master host. +A wildcard for a DNS zone must ultimately resolve to the IP address of the OpenShift link:../../architecture/core_objects/routing.html[router]. During the Beta 1 phase, this guide ensures that the router ends up on the OpenShift master host. Create a wildcard DNS entry for `cloudapps`, or something similar, that has a low TTL and points to the public IP address of the master. 
For example:
diff --git a/getting_started/dev_get_started/try_it_out.adoc b/getting_started/dev_get_started/try_it_out.adoc
index 8709f8ffa590..f0d53664f6d5 100644
--- a/getting_started/dev_get_started/try_it_out.adoc
+++ b/getting_started/dev_get_started/try_it_out.adoc
@@ -10,10 +10,17 @@ toc::[]

== Overview

-After link:setup.html[setting up] and link:installation.html[installing] OpenShift, you can start creating applications on your instance by trying out the following example. Additional examples will be added to this section over time, such as creating an application using official Red Hat container images or any arbitrary Docker image.
+After link:setup.html[setting up] and link:installation.html[installing]
+OpenShift, you can start creating applications on your instance by trying out
+the following example. Additional examples will be added to this section over
+time, such as creating an application using official Red Hat container images or
+any arbitrary Docker image.

== Sample Application Lifecycle

-To create an end-to-end application, demonstrating the full OpenShift concept chain, see the https://github.com/openshift/origin/blob/master/examples/sample-app/README.md[OpenShift 3 Application Lifecycle Sample].
+To create an end-to-end application, demonstrating the full OpenShift concept
+chain, see the
+https://github.com/openshift/origin/blob/master/examples/sample-app/README.md[OpenShift
+3 Application Lifecycle Sample].

////
== Create an Application Using Red Hat Images
diff --git a/getting_started/overview.adoc b/getting_started/overview.adoc
index 2bb4cea5f8da..06710d3762b8 100644
--- a/getting_started/overview.adoc
+++ b/getting_started/overview.adoc
@@ -7,4 +7,7 @@
:toc: macro
:toc-title:

-
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/rest_api/overview.adoc b/rest_api/overview.adoc
index 1a0076466f89..8d97ce35d5eb 100644
--- a/rest_api/overview.adoc
+++ b/rest_api/overview.adoc
@@ -4,3 +4,8 @@
:data-uri:
:icons:
:experimental:
+
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/using_images/docker_images/overview.adoc b/using_images/docker_images/overview.adoc
index 8eee75e3d325..8d97ce35d5eb 100644
--- a/using_images/docker_images/overview.adoc
+++ b/using_images/docker_images/overview.adoc
@@ -5,4 +5,7 @@
:icons:
:experimental:

-
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/using_images/overview.adoc b/using_images/overview.adoc
index 8eee75e3d325..8d97ce35d5eb 100644
--- a/using_images/overview.adoc
+++ b/using_images/overview.adoc
@@ -5,4 +5,7 @@
:icons:
:experimental:

-
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/using_images/xpaas_images/a_mq.adoc b/using_images/xpaas_images/a_mq.adoc
index e5f6a860f219..569b7a893660 100644
--- a/using_images/xpaas_images/a_mq.adoc
+++ b/using_images/xpaas_images/a_mq.adoc
@@ -7,4 +7,9 @@
:toc: macro
:toc-title:

-toc::[]
\ No newline at end of file
+toc::[]
+
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/using_images/xpaas_images/eap.adoc b/using_images/xpaas_images/eap.adoc
index f81f2cf902f9..d94a6ab3dfc0 100644
--- a/using_images/xpaas_images/eap.adoc
+++ b/using_images/xpaas_images/eap.adoc
@@ -7,4 +7,9 @@
:toc: macro
:toc-title:

-toc::[]
\ No newline at end of file
+toc::[]
+
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/using_images/xpaas_images/jws.adoc b/using_images/xpaas_images/jws.adoc
index eb776c3ded57..c13b8f74590a 100644
--- a/using_images/xpaas_images/jws.adoc
+++ b/using_images/xpaas_images/jws.adoc
@@ -7,4 +7,9 @@
:toc: macro
:toc-title:

-toc::[]
\ No newline at end of file
+toc::[]
+
+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/using_images/xpaas_images/overview.adoc b/using_images/xpaas_images/overview.adoc
index 546a30882a75..8d97ce35d5eb 100644
--- a/using_images/xpaas_images/overview.adoc
+++ b/using_images/xpaas_images/overview.adoc
@@ -5,3 +5,7 @@
:icons:
:experimental:

+If you'd like to contribute to OpenShift documentation, see our
+https://github.com/openshift/openshift-docs[source repository] and
+https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[guidelines]
+to get started.
diff --git a/welcome/index.adoc b/welcome/index.adoc
index a0dc937362b7..b1d0681968de 100644
--- a/welcome/index.adoc
+++ b/welcome/index.adoc
@@ -5,30 +5,39 @@
:icons:

ifdef::openshift-origin[]
-Welcome to OpenShift documentation. Here you will find information and resources to help you learn about OpenShift and its features. From getting started with creating your first application to using the advanced features, these resources provide information to set up and manage your OpenShift environment.
+Welcome to OpenShift documentation. Here you will find information and resources
+to help you learn about OpenShift and its features. From getting started with
+creating your first application to using the advanced features, these resources
+provide information to set up and manage your OpenShift environment.

[cols="2",frame="none",grid="cols"]
|===
-a|link:../v2_changes/overview.html[*What's New*]
+a|link:../whats_new/overview.html[*What's New*]

Describes what is new in OpenShift version 3.

a|link:../architecture/overview.html[*Architecture*]

-Describes the OpenShift version 3 architecture and provides information on the main components.
+Describes the OpenShift version 3 architecture and provides information on the
+main components.
+
+a|link:../getting_started/overview.html[*Getting Started*]
+
+These topics describe how to get started with OpenShift as a developer or a
+platform administrator.
-a|link:../getting_started/overview.html[*Installing OpenShift*]
+a|link:../admin_guide/overview.html[*Administrator Guide*]

-These topics describe how to get started and install OpenShift on a workstation.
+These topics describe how to use OpenShift as an administrator.

-a|link:../using_openshift/overview.html[*Using OpenShift*]
+a|link:../dev_guide/overview.html[*Developer Guide*]

-These topics describe how to use OpenShift.
+These topics describe how to use OpenShift as a developer.

-a|link:../image_writers_guide/overview.html[*Writing OpenShift Images*]
+a|link:../creating_images/overview.html[*Creating Images*]

-These topics describe how to develop OpenShift images.
+These topics describe how to develop images for use on OpenShift.

|

@@ -157,4 +166,4 @@ a|[none]
* link:../accessing_openshift/jboss_tools.html[Article 4]
|===
////
-endif::openshift-enterprise[]
\ No newline at end of file
+endif::openshift-enterprise[]
diff --git a/whats_new/applications.adoc b/whats_new/applications.adoc
index 672452bb289b..3fb1025e104e 100644
--- a/whats_new/applications.adoc
+++ b/whats_new/applications.adoc
@@ -11,16 +11,46 @@ toc::[]

*Applications in OpenShift v2*

-Applications have always been a focal point within OpenShift. In OpenShift v2, an application was very well defined in that it consisted of one web framework and no more than one of any given cartridge type. So an application could have one PHP and one MySQL, for example, but it could not have one Ruby, one PHP, and two MySQLs. It also could not have a MySQL cartridge by itself.
-
-The limited scoping for applications meant that OpenShift could perform seamless linking for all components within an application using well-defined environment variables. Every web framework knew how to connect to MySQL using the `OPENSHIFT_MYSQL_DB_HOST` and `OPENSHIFT_MYSQL_DB_PORT` variables, for example. But this linking was limited to within an application and only worked within cartridges designed to work together. There was nothing to help link across application components, such as sharing a MySQL instance across two applications.
+Applications have always been a focal point within OpenShift. In OpenShift v2,
+an application was very well defined in that it consisted of one web framework
+and no more than one of any given cartridge type. So an application could have
+one PHP and one MySQL, for example, but it could not have one Ruby, one PHP, and
+two MySQLs. It also could not have a MySQL cartridge by itself.
+
+The limited scoping for applications meant that OpenShift could perform seamless
+linking for all components within an application using well-defined environment
+variables. Every web framework knew how to connect to MySQL using the
+`OPENSHIFT_MYSQL_DB_HOST` and `OPENSHIFT_MYSQL_DB_PORT` variables, for example.
+But this linking was limited to within an application and only worked within
+cartridges designed to work together. There was nothing to help link across
+application components, such as sharing a MySQL instance across two
+applications.

*Applications in OpenShift v3*

-From OpenShift v2 it was clear that solving the problems of the entire application is essential. Most other PaaSes limit themselves to web frameworks and rely on external services for the other types of components. OpenShift v3 takes the next steps by making even more application topologies possible and making existing topologies more manageable.
-
-The first step necessary to accomplish this is to remove "application" as a keyword since "application" can mean something different to everyone. Instead, you can have as many components as you desire, contained by a link:../architecture/openshift_model.html#project[project], flexibly linked together, and optionally labelled to provide any groupings or structure. This new model allows for a standalone MySQL instance, or one shared between JBoss components, or really any combination of components you can imagine.
-
-Flexible linking means you can link any two arbitrary components together. As long as one component can export environment variables and the second component consume values from those environment variables, with potential variable name transformation, you can link together any two components without having to change the images they are based on. So the best containerized implementation of your desired database and web framework can be consumed directly rather than you having to fork them both and rework them to be compatible.
-
-The result means you can build anything on OpenShift. And that is the problem OpenShift really aims to solve: a platform built on containers that lets you build entire applications in a repeatable lifecycle.
+From OpenShift v2 it was clear that solving the problems of the entire
+application is essential. Most other PaaSes limit themselves to web frameworks
+and rely on external services for the other types of components. OpenShift v3
+takes the next steps by making even more application topologies possible and
+making existing topologies more manageable.
+
+The first step necessary to accomplish this is to remove "application" as a
+keyword since "application" can mean something different to everyone. Instead,
+you can have as many components as you desire, contained by a
+link:../architecture/core_objects/openshift_model.html#project[project],
+flexibly linked together, and optionally labelled to provide any groupings or
+structure. This new model allows for a standalone MySQL instance, or one shared
+between JBoss components, or really any combination of components you can
+imagine.
+
+Flexible linking means you can link any two arbitrary components together. As
+long as one component can export environment variables and the second component
+consume values from those environment variables, with potential variable name
+transformation, you can link together any two components without having to
+change the images they are based on. So the best containerized implementation of
+your desired database and web framework can be consumed directly rather than you
+having to fork them both and rework them to be compatible.
+
+The result means you can build anything on OpenShift. And that is the problem
+OpenShift really aims to solve: a platform built on containers that lets you
+build entire applications in a repeatable lifecycle.
diff --git a/whats_new/terminology.adoc b/whats_new/terminology.adoc
index 23fb21c48af7..3361d224822b 100644
--- a/whats_new/terminology.adoc
+++ b/whats_new/terminology.adoc
@@ -10,26 +10,49 @@ toc::[]

== Overview

-Because of the architectural changes in OpenShift v3, a number of core terms used in OpenShift v2 have changed to better reflect the new model. The following sections highlight some of these important changes. See the link:../architecture/openshift_model.html[OpenShift Model] topic for more detailed information on the resources in the new model.
+Because of the architectural changes in OpenShift v3, a number of core terms
+used in OpenShift v2 have changed to better reflect the new model. The following
+sections highlight some of these important changes. See the
+link:../architecture/core_objects/openshift_model.html[OpenShift Model] topic
+for more detailed information on the resources in the new model.

*Application*

-The _application_ term or concept no longer exists in OpenShift v3. See the link:applications.html[Applications] topic for a more in-depth look at this change.
+A specific _application_ term or concept no longer exists in OpenShift v3. See
+the link:applications.html[Applications] topic for a more in-depth look at this
+change.

*Cartridge vs Image*

-The easiest replacement term for _cartridge_ in OpenShift v3 is _image_. An image does more than a cartridge from a packaging perspective, providing better encapsulation and flexibility. But the cartridge concept also included logic for building, deploying, and routing which do not exist in images. In OpenShift v3, these additional needs are meet by Source-to-Image and templated configuration.
+The easiest replacement term for _cartridge_ in OpenShift v3 is
+link:../architecture/core_objects/openshift_model.html#image[_image_]. An image
+does more than a cartridge from a packaging perspective, providing better
+encapsulation and flexibility. But the cartridge concept also included logic for
+building, deploying, and routing which do not exist in images. In OpenShift v3,
+these additional needs are met by
+link:../architecture/core_objects/builds.html#sti-build[Source-to-Image (STI)]
+and link:../architecture/core_objects/openshift_model.html#template[templated
+configuration].

-See the link:carts_vs_images.html[Cartridges vs Images] topic for more detailed information on these changes.
+See the link:carts_vs_images.html[Cartridges vs Images] topic for more detailed
+information on these changes.

*Project vs Domain*

-_Project_ is essentially a rename of _domain_ from OpenShift v2. Projects do have several features that are not a part of domains in OpenShift v2.
+link:../architecture/core_objects/openshift_model.html#project[_Project_] is
+essentially a rename of _domain_ from OpenShift v2. Projects do have several
+features that are not a part of domains in OpenShift v2.

*Gear vs Container*

-The _gear_ and _container_ terms are interchangeable. Containers have a cleaner mapping of being 1:1 with images whereas many cartridges could be added to a single gear. With containers, the collocation concept is satisfied by pods.
+The _gear_ and _container_ terms are interchangeable. Containers have a cleaner
+mapping of being one-to-one with images, whereas many cartridges could be added
+to a single gear. With containers, the collocation concept is satisfied by
+link:../architecture/core_objects/kubernetes_model.html#pod[pods].

*Master vs Broker*

-_Masters_ in OpenShift v3 do the job of the _broker_ layer in OpenShift v2. However, the MongoDB and ActiveMQ layers used by the broker in OpenShift v2 are no longer necessary because [sysitem]#etcd# is typically installed with each master.
+link:../architecture/infrastructure_components/kubernetes_infrastructure.html#master[_Masters_]
+in OpenShift v3 do the job of the _broker_ layer in OpenShift v2. However, the
+MongoDB and ActiveMQ layers used by the broker in OpenShift v2 are no longer
+necessary because [sysitem]#etcd# is typically installed with each master.
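The flexible-linking model described in the whats_new/applications.adoc changes above amounts to plain environment-variable plumbing between components. A minimal sketch follows; only the `OPENSHIFT_MYSQL_DB_*` names come from the text, while the generic `DATABASE_*` names and the example values are hypothetical:

```shell
# Producer side: the MySQL component publishes its documented variables
# (example values are illustrative only).
export OPENSHIFT_MYSQL_DB_HOST=db.example.com
export OPENSHIFT_MYSQL_DB_PORT=3306

# Consumer side: a component that expects generic names remaps them at
# startup. This name transformation links the two components without
# changing either image.
export DATABASE_HOST="$OPENSHIFT_MYSQL_DB_HOST"
export DATABASE_PORT="$OPENSHIFT_MYSQL_DB_PORT"

echo "connecting to $DATABASE_HOST:$DATABASE_PORT"
```

Because the remapping happens in the consumer's startup environment, any producer that exports host/port variables can be paired with any consumer, which is the point the patched text makes.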