- Overview
- Creating a Deployment Configuration
- Starting a Deployment
- Viewing a Deployment
- Canceling a Deployment
- Retrying a Deployment
- Rolling Back a Deployment
- Executing Commands Inside a Container
- Viewing Deployment Logs
- Triggers
- Strategies
- Lifecycle Hooks
- Deployment Resources
- Manual Scaling
- Assigning Pods to Specific Nodes
- Running a Pod with a Different Service Account
{product-title} deployments provide fine-grained management over applications based on a user-defined template called a deployment configuration. In response to a deployment configuration, the deployment system creates a replication controller to run the application. Replication controllers are created manually or in response to triggered events.
Important
Users should never manipulate replication controllers owned by deployment configurations. The deployment system makes sure changes to deployment configurations are propagated appropriately to replication controllers.
Features provided by the deployment system:
- A deployment configuration, which is a template for running applications.
- Triggers that drive automated deployments in response to events.
- User-customizable strategies to transition from the previous version to the new version.
- Rollbacks to a previous version either manually or automatically in case of deployment failure.
- Manual replication scaling and autoscaling.
The deployment configuration contains a version number that is incremented each time a new replication controller is created from that configuration. In addition, the cause of the last deployed replication controller is added to the deployment configuration.
A deployment configuration consists of the following key parts:
- A pod template, which describes the application to be deployed.
- The initial replica count for the replication controller.
- A deployment strategy, which will be used to deploy the application.
- A set of triggers, which cause replication controllers to be created automatically.
- A set of hooks for executing custom behavior at different points during the lifecycle of a deployment.
Deployment configurations are deploymentConfig {product-title} API resources which can be managed with the oc command like any other resource. The following is an example of a deploymentConfig resource:
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "frontend"
spec:
template: (1)
metadata:
labels:
name: "frontend"
spec:
containers:
- name: "helloworld"
image: "openshift/origin-ruby-sample"
ports:
- containerPort: 8080
protocol: "TCP"
replicas: 5 (2)
triggers:
- type: "ConfigChange" (3)
- type: "ImageChange" (4)
imageChangeParams:
automatic: true
containerNames:
- "helloworld"
from:
kind: "ImageStreamTag"
name: "origin-ruby-sample:latest"
strategy: (5)
type: "Rolling"
paused: false (6)
revisionHistoryLimit: 2 (7)
minReadySeconds: 0 (8)
- The pod template of the frontend deployment configuration describes a simple Ruby application.
- There will be 5 replicas of frontend.
- A configuration change trigger causes a new replication controller to be created any time the pod template changes.
- An image change trigger causes a new replication controller to be created each time a new version of the origin-ruby-sample:latest image stream tag is available.
- The Rolling strategy is the default way of deploying your pods. May be omitted.
- The existing running deployment will not be paused. Pod template and image changes cover the full spectrum of changes that currently trigger deployments.
- The revision history limit is the number of old replication controllers to keep around for rolling back. May be omitted; if omitted, old replication controllers will not be cleaned up.
- The minimum number of seconds to wait (after the readiness checks succeed) for a pod to be considered available. The default value is 0.
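A definition like this can be saved to a file and created with the oc command like any other resource; for example (the file name is illustrative):

$ oc create -f frontend-dc.yaml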
You can start a new deployment manually using the web console, or from the CLI:
$ oc deploy --latest dc/<name>
Note
If a deployment is already in progress, the command will display a message and a new replication controller will not be deployed.
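On newer clients, the same operation is also available as an oc rollout subcommand (assuming a client version that includes it):

$ oc rollout latest dc/<name>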
To get basic information about recent deployments:
$ oc rollout history dc/<name>
This will show details about all recently created replication controllers for the provided deployment configuration, including any currently running deployment.
You can view details specific to a revision by using the --revision flag:
$ oc rollout history dc/<name> --revision=1
For more detailed information about a deployment configuration and its latest deployment:
$ oc describe dc <name>
Note
The web console shows deployments in the Browse tab.
To cancel a running or stuck deployment:
$ oc deploy --cancel dc/<name>
Warning
The cancellation is a best-effort operation, and may take some time to complete. The replication controller may partially or totally complete deployment before the cancellation is effective. When canceled, the deployment automatically rolls back.
To retry the last failed deployment:
$ oc deploy --retry dc/<name>
If the last deployment did not fail, the command will display a message and the deployment will not be retried.
Note
Retrying a deployment restarts the deployment and does not create a new deployment revision. The restarted deployment will have the same configuration it had when it failed.
Rollbacks revert an application back to a previous revision and can be performed using the REST API, the CLI, or the web console.
To roll back to the last successfully deployed revision of your configuration:
$ oc rollout undo dc/<name>
The deployment configuration’s template will be reverted to match the deployment revision specified in the undo command, and a new replication controller will be started. If no revision is specified with --to-revision, then the last successfully deployed revision will be used.
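For example, to roll back to a specific revision from the rollout history (revision 1 here is illustrative):

$ oc rollout undo dc/<name> --to-revision=1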
Image change triggers on the deployment configuration are disabled as part of the rollback to prevent unwanted deployments soon after the rollback is complete. To re-enable the image change triggers:
$ oc set triggers dc/<name> --auto
Note
Deployment configurations also support automatically rolling back to the last successful revision of the configuration in case the latest template fails to deploy. In that case, the latest template that failed to deploy is left intact by the system, and it is up to users to fix their configurations.
You can add a command to a container, which modifies the container’s startup behavior by overriding the image’s ENTRYPOINT. This is different from a lifecycle hook, which instead can be run once per deployment at a specified time.
Add the command parameters to the spec field of the deployment configuration. You can also add an args field, which modifies the command (or the ENTRYPOINT if command does not exist).
...
spec:
  containers:
    - name: <container_name>
      image: 'image'
      command:
        - '<command>'
      args:
        - '<argument_1>'
        - '<argument_2>'
        - '<argument_3>'
...
For example, to execute the java command with the -jar and /opt/app-root/springboots2idemo.jar arguments:
...
spec:
  containers:
    - name: example-spring-boot
      image: 'image'
      command:
        - java
      args:
        - '-jar'
        - /opt/app-root/springboots2idemo.jar
...
To stream the logs of the latest revision for a given deployment configuration:
$ oc logs -f dc/<name>
If the latest revision is running or failed, oc logs will return the logs of the process that is responsible for deploying your pods. If it is successful, oc logs will return the logs from a pod of your application.
You can also view logs from older failed deployments, if and only if the old deployment exists and has not been pruned or deleted manually:
$ oc logs --version=1 dc/<name>
For more options on retrieving logs see:
$ oc logs --help
A deployment configuration can contain triggers, which drive the creation of new deployments in response to events. Currently, only events inside {product-title} can trigger deployments.
Warning
If no triggers are defined on a deployment configuration, a ConfigChange trigger is added by default.
The ConfigChange trigger results in a new replication controller whenever changes are detected in the pod template of the deployment configuration.
Note
If a ConfigChange trigger is defined on a deployment configuration, the first replication controller is automatically created soon after the deployment configuration itself is created, provided it is not paused.
A ConfigChange trigger:

triggers:
  - type: "ConfigChange"
The ImageChange trigger results in a new replication controller whenever the content of an image stream tag changes (when a new version of the image is pushed).
An ImageChange trigger:

triggers:
  - type: "ImageChange"
    imageChangeParams:
      automatic: true (1)
      from:
        kind: "ImageStreamTag"
        name: "origin-ruby-sample:latest"
      containerNames:
        - "helloworld"
- If the imageChangeParams.automatic field is set to false, the trigger is disabled.
With the above example, when the latest tag value of the origin-ruby-sample image stream changes and the new image value differs from the current image specified in the deployment configuration’s helloworld container, a new deployment is created using the new image for the helloworld container.
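An equivalent trigger can also be added from the CLI with oc set triggers; a sketch matching the example above:

$ oc set triggers dc/frontend --from-image=origin-ruby-sample:latest -c helloworld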
Note
If an ImageChange trigger is defined on a deployment configuration and the image stream tag it points to does not exist yet, the initial deployment process will automatically start as soon as an image is imported or pushed by a build to that image stream tag.
A deployment strategy determines the deployment process, and is defined by the deployment configuration. Each application has different requirements for availability (and other considerations) during deployments. {product-title} provides strategies to support a variety of deployment scenarios.
A deployment strategy uses readiness checks to determine if a new pod is ready for use. If a readiness check fails, the deployment configuration will retry to run the pod until it times out. The default timeout is 10m, a value set in TimeoutSeconds in dc.spec.strategy.*params.
The Rolling strategy is the default strategy used if no strategy is specified on a deployment configuration.
The rolling strategy performs a rolling update and supports lifecycle hooks for injecting code into the deployment process.
The rolling deployment strategy waits for pods to pass their readiness check before scaling down old components, and does not allow pods that do not pass their readiness check within a configurable timeout.
The following is an example of the Rolling strategy:
strategy:
  type: Rolling
  rollingParams:
    timeoutSeconds: 120 (1)
    maxSurge: "20%" (2)
    maxUnavailable: "10%" (3)
    pre: {} (4)
    post: {}
- How long to wait for a scaling event before giving up. Optional; the default is 120.
- maxSurge is optional and defaults to 25%; see below.
- maxUnavailable is optional and defaults to 25%; see below.
- pre and post are both lifecycle hooks.
The Rolling strategy will:
- Execute any pre lifecycle hook.
- Scale up the new replication controller based on the surge count.
- Scale down the old replication controller based on the max unavailable count.
- Repeat this scaling until the new replication controller has reached the desired replica count and the old replication controller has been scaled to zero.
- Execute any post lifecycle hook.
Important
When scaling down, the Rolling strategy waits for pods to become ready so it can decide whether further scaling would affect availability. If scaled up pods never become ready, the deployment will eventually time out and result in a deployment failure.
Important
When executing the post lifecycle hook, all failures are ignored regardless of the failure policy specified in the hook.
The maxUnavailable parameter is the maximum number of pods that can be unavailable during the update. The maxSurge parameter is the maximum number of pods that can be scheduled above the original number of pods. Both parameters can be set to either a percentage (e.g., 10%) or an absolute value (e.g., 2). The default value for both is 25%.
These parameters allow the deployment to be tuned for availability and speed. For example:
- maxUnavailable=0 and maxSurge=20% ensures full capacity is maintained during the update and rapid scale up.
- maxUnavailable=10% and maxSurge=0 performs an update using no extra capacity (an in-place update).
- maxUnavailable=10% and maxSurge=10% scales up and down quickly with some potential for capacity loss.
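As a rough illustration of how the percentages resolve (assuming the usual rounding, where the surge value rounds up and the unavailable value rounds down), a deployment configuration with replicas: 5 and the default 25% values behaves as follows:

strategy:
  type: Rolling
  rollingParams:
    maxSurge: "25%"       # with replicas: 5, resolves to 2 extra pods (at most 7 pods during the update)
    maxUnavailable: "25%" # with replicas: 5, resolves to 1 pod (at least 4 pods available at all times)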
The Recreate strategy has basic rollout behavior and supports lifecycle hooks for injecting code into the deployment process.
The following is an example of the Recreate strategy:
strategy:
  type: Recreate
  recreateParams: (1)
    pre: {} (2)
    mid: {}
    post: {}
- recreateParams are optional.
- pre, mid, and post are lifecycle hooks.
The Recreate strategy will:
- Execute any "pre" lifecycle hook.
- Scale down the previous deployment to zero.
- Execute any "mid" lifecycle hook.
- Scale up the new deployment.
- Execute any "post" lifecycle hook.
Important
During scale up, if the replica count of the deployment is greater than one, the first replica of the deployment will be validated for readiness before fully scaling up the deployment. If the validation of the first replica fails, the deployment will be considered a failure.
The Custom strategy allows you to provide your own deployment behavior.
The following is an example of the Custom strategy:
strategy:
  type: Custom
  customParams:
    image: organization/strategy
    command: [ "command", "arg1" ]
    environment:
      - name: ENV_1
        value: VALUE_1
In the above example, the organization/strategy container image provides the deployment behavior. The optional command array overrides any CMD directive specified in the image’s Dockerfile. The optional environment variables provided are added to the execution environment of the strategy process.
Additionally, {product-title} provides the following environment variables to the strategy process:
Environment Variable | Description
---|---
OPENSHIFT_DEPLOYMENT_NAME | The name of the new deployment (a replication controller).
OPENSHIFT_DEPLOYMENT_NAMESPACE | The namespace of the new deployment.
The replica count of the new deployment will initially be zero. The responsibility of the strategy is to make the new deployment active using the logic that best serves the needs of the user.
The Recreate and Rolling strategies support lifecycle hooks, which allow behavior to be injected into the deployment process at predefined points within the strategy.
The following is an example of a "pre" lifecycle hook:
pre:
  failurePolicy: Abort
  execNewPod: {} (1)
- execNewPod is a pod-based lifecycle hook.
Every hook has a failurePolicy, which defines the action the strategy should take when a hook failure is encountered:

Abort | The deployment process will be considered a failure if the hook fails.
Retry | The hook execution should be retried until it succeeds.
Ignore | Any hook failure should be ignored and the deployment should proceed.
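For example, a pre hook that should keep retrying until it succeeds might look like the following sketch (the migration command is hypothetical):

pre:
  failurePolicy: Retry
  execNewPod:
    containerName: helloworld
    command: [ "/usr/bin/migrate-db" ] # hypothetical command, retried until it succeeds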
Hooks have a type-specific field that describes how to execute the hook. Currently, pod-based hooks are the only supported hook type, specified by the execNewPod field.
Pod-based lifecycle hooks execute hook code in a new pod derived from the template in a deployment configuration.
The following simplified example deployment configuration uses the Rolling strategy. Triggers and some other minor details are omitted for brevity:
kind: DeploymentConfig
apiVersion: v1
metadata:
  name: frontend
spec:
  template:
    metadata:
      labels:
        name: frontend
    spec:
      containers:
        - name: helloworld
          image: openshift/origin-ruby-sample
  replicas: 5
  selector:
    name: frontend
  strategy:
    type: Rolling
    rollingParams:
      pre:
        failurePolicy: Abort
        execNewPod:
          containerName: helloworld (1)
          command: [ "/usr/bin/command", "arg1", "arg2" ] (2)
          env: (3)
            - name: CUSTOM_VAR1
              value: custom_value1
          volumes:
            - data (4)
- The helloworld name refers to spec.template.spec.containers[0].name.
- This command overrides any ENTRYPOINT defined by the openshift/origin-ruby-sample image.
- env is an optional set of environment variables for the hook container.
- volumes is an optional set of volume references for the hook container.
In this example, the "pre" hook will be executed in a new pod using the openshift/origin-ruby-sample image from the helloworld container. The hook pod will have the following properties:
- The hook command will be /usr/bin/command arg1 arg2.
- The hook container will have the CUSTOM_VAR1=custom_value1 environment variable.
- The hook failure policy is Abort, meaning the deployment process will fail if the hook fails.
- The hook pod will inherit the data volume from the deployment configuration pod.
The oc set deployment-hook command can be used to set the deployment hook for a deployment configuration. For the example above, you can set the pre-deployment hook with the following command:
$ oc set deployment-hook dc/frontend --pre -c helloworld -e CUSTOM_VAR1=custom_value1 \
  -v data --failure-policy=abort -- /usr/bin/command arg1 arg2
A deployment is completed by a pod that consumes resources (memory and CPU) on a node. By default, pods consume unbounded node resources. However, if a project specifies default container limits, then pods consume resources up to those limits.
You can also limit resource use by specifying resource limits as part of the deployment strategy. Deployment resources can be used with the Recreate, Rolling, or Custom deployment strategies.
In the following example, each of resources, cpu, and memory is optional:
type: "Recreate"
resources:
limits:
cpu: "100m" (1)
memory: "256Mi" (2)
- cpu is in CPU units: 100m represents 0.1 CPU units (100 * 1e-3).
- memory is in bytes: 256Mi represents 268435456 bytes (256 * 2^20).
However, if a quota has been defined for your project, one of the following two items is required:
- A resources section set with an explicit requests:

type: "Recreate"
resources:
  requests: (1)
    cpu: "100m"
    memory: "256Mi"

The requests object contains the list of resources that correspond to the list of resources in the quota.

- A limit range defined in your project, where the defaults from the LimitRange object apply to pods created during the deployment process.

Otherwise, deploy pod creation will fail, citing a failure to satisfy quota.
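To check whether a quota has been defined for your project (assuming one exists to describe):

$ oc describe quota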
In addition to rollbacks, you can exercise fine-grained control over the number of replicas from the web console, or by using the oc scale command.
For example, the following command sets the replicas in the deployment configuration frontend to 3.
$ oc scale dc frontend --replicas=3
The number of replicas eventually propagates to the desired and current state of the deployment configured by the deployment configuration frontend.
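Autoscaling, mentioned in the feature list above, has a similar CLI entry point; a minimal sketch using the oc autoscale command (assuming cluster metrics are available so CPU usage can be observed):

$ oc autoscale dc/frontend --min=1 --max=5 --cpu-percent=80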
You can use node selectors in conjunction with labeled nodes to control pod placement.
Cluster administrators can set the default node selector for your project in order to restrict pod placement to specific nodes. As an {product-title} developer, you can set a node selector on a pod configuration to restrict nodes even further.
To add a node selector when creating a pod, edit the pod configuration, and add the nodeSelector value. This can be added to a single pod configuration, or in a pod template:
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    disktype: ssd
...
Pods created when the node selector is in place are assigned to nodes with the specified labels.
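For such a pod to be schedulable, a node must carry the matching label. A cluster administrator could add it with a command along these lines (the node name is a placeholder):

$ oc label node <node_name> disktype=ssd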
The labels specified here are used in conjunction with the labels added by a cluster administrator. For example, if the type=user-node and region=east labels have been added to a project by the cluster administrator, and you add the above disktype: ssd label to a pod, the pod will only ever be scheduled on nodes that have all three labels.
Note
Labels can only be set to one value, so setting a node selector of region=west in a pod configuration that has region=east as the administrator-set default results in a pod that will never be scheduled.
You can run a pod with a service account other than the default:
- Edit the deployment configuration:

$ oc edit dc/<deployment_config>
- Add the serviceAccount and serviceAccountName parameters to the spec field, and specify the service account you want to use:

spec:
  securityContext: {}
  serviceAccount: <service_account>
  serviceAccountName: <service_account>
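The referenced service account must already exist in the project. If it does not, a minimal sketch for creating one:

$ oc create serviceaccount <service_account>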