diff --git a/404.html b/404.html
index bf9ee698d790..3c2f372ea862 100644
--- a/404.html
+++ b/404.html
@@ -1,3809 +1,11 @@
- Argo Workflows - The workflow engine for Kubernetes
+ Argo Workflows - The workflow engine for Kubernetes

404 - Not found


This page has moved to https://argo-workflows.readthedocs.io/en/latest/404.html.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/CONTRIBUTING/index.html b/CONTRIBUTING/index.html
index dea365877dca..d36d9c316ed8 100644
--- a/CONTRIBUTING/index.html
+++ b/CONTRIBUTING/index.html
@@ -1,4168 +1,11 @@
- Contributing - Argo Workflows - The workflow engine for Kubernetes
+ Contributing - Argo Workflows - The workflow engine for Kubernetes

Contributing

-

How To Provide Feedback

-

Please raise an issue in GitHub.

-

Code of Conduct

-

See CNCF Code of Conduct.

-

Community Meetings (monthly)

-

A monthly opportunity for users and maintainers of Workflows and Events to share their current work and hear about what's coming on the roadmap. Please join us! For Community Meeting information, minutes and recordings please see here.

-

Contributor Meetings (twice monthly)

-

A twice-monthly opportunity for committers and maintainers of Workflows and Events to discuss their current work and talk about what's next. Feel free to join us! For Contributor Meeting information, minutes and recordings please see here.

-

How To Contribute

-

We're always looking for contributors.

-
  • Documentation - something missing or unclear? Please submit a pull request!
  • Code contribution - investigate a good first issue, or anything not assigned.
  • You can work on an issue without being assigned.
  • Join the #argo-contributors channel on our Slack.

Running Locally

-

To run Argo Workflows locally for development, see running locally.

-
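A minimal sketch of the local development loop described in that guide (it assumes the standard Makefile targets of the argo-workflows repository; "make start UI=true" is also mentioned in the Architecture page):

git clone https://github.com/argoproj/argo-workflows.git
cd argo-workflows
# start the controller, the Argo Server and the UI locally (the UI hot-reloads on code changes)
make start UI=true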

Committing

-

See the Committing Guidelines.

-

Dependencies

-

Dependencies increase the risk of security issues and have ongoing maintenance costs.

-

The dependency must pass these tests:

-
  • A strong use case.
  • It has an acceptable license (e.g. MIT).
  • It is actively maintained.
  • It has no security issues.

For example, should we add fasttemplate? View the Snyk report:

Test                                       Outcome
A strong use case.                         ❌ Fail. We can use text/template.
It has an acceptable license (e.g. MIT).   ✅ Pass. MIT license.
It is actively maintained.                 ❌ Fail. Project is inactive.
It has no security issues.                 ✅ Pass. No known security issues.
-

No, we should not add that dependency.

-

Test Policy

-

Changes without either unit or e2e tests are unlikely to be accepted. See the pull request template.

-

Contributor Workshop

-

Please check out the contributor workshop resources if you are interested in contributing.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/CONTRIBUTING/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/access-token/index.html b/access-token/index.html
index 63bde8bdd7ab..448e94f18b3f 100644
--- a/access-token/index.html
+++ b/access-token/index.html
@@ -1,4161 +1,11 @@
- Access Token - Argo Workflows - The workflow engine for Kubernetes
+ Access Token - Argo Workflows - The workflow engine for Kubernetes

Access Token

-

Overview

-

If you want to automate tasks with the Argo Server API or CLI, you will need an access token.

-

Prerequisites

-

Firstly, create a role with minimal permissions. This example role for jenkins grants only permission to list and update workflows:

-
kubectl create role jenkins --verb=list,update --resource=workflows.argoproj.io
-
-

Create a service account for your service:

-
kubectl create sa jenkins
-
-

Tip for Tokens Creation

-

Create a unique service account for each client:

-
  • (a) you'll be able to correctly secure your workflows
  • (b) revoke the token without impacting other clients.

Bind the service account to the role (in this case in the argo namespace):

-
kubectl create rolebinding jenkins --role=jenkins --serviceaccount=argo:jenkins
-
-

Token Creation

-

You now need to create a secret to hold your token:

-
    kubectl apply -f - <<EOF
-apiVersion: v1
-kind: Secret
-metadata:
-  name: jenkins.service-account-token
-  annotations:
-    kubernetes.io/service-account.name: jenkins
-type: kubernetes.io/service-account-token
-EOF
-
-

Wait a few seconds:

-
ARGO_TOKEN="Bearer $(kubectl get secret jenkins.service-account-token -o=jsonpath='{.data.token}' | base64 --decode)"
-echo $ARGO_TOKEN
-Bearer ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltS...
-
-

Token Usage & Test

-

To use that token with the CLI you need to set ARGO_SERVER (see argo --help).

-
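For example, a minimal sketch of using the token with the CLI (assuming the Argo Server is reachable on localhost:2746, e.g. via kubectl port-forward):

# export the token created above together with the server address
ARGO_SERVER=localhost:2746
export ARGO_SERVER ARGO_TOKEN
argo list -n argo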

Use that token in your API requests, e.g. to list workflows:

-
curl https://localhost:2746/api/v1/workflows/argo -H "Authorization: $ARGO_TOKEN"
-# 200 OK
-
-

You should check that you cannot do things you're not allowed to do:

-
curl https://localhost:2746/api/v1/workflow-templates/argo -H "Authorization: $ARGO_TOKEN"
-# 403 error
-
-

Token Usage - Docker

-

Set additional parameters to initialize the Argo settings:

-
ARGO_SERVER="${{HOST}}:443"
-KUBECONFIG=/dev/null
-ARGO_NAMESPACE=sandbox
-
-

Start container with settings above

-

Example for listing templates in a namespace:

-
docker run --rm -it \
-  -e ARGO_SERVER=$ARGO_SERVER \
-  -e ARGO_TOKEN=$ARGO_TOKEN \
-  -e ARGO_HTTP=false \
-  -e ARGO_HTTP1=true \
-  -e KUBECONFIG=/dev/null \
-  -e ARGO_NAMESPACE=$ARGO_NAMESPACE  \
-  argoproj/argocli:latest template list -v -e -k
-
-

Token Revocation

-

Token compromised? Delete the token's secret ($SECRET being its name, e.g. jenkins.service-account-token):

-
kubectl delete secret $SECRET
-
-

A new one will be created.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/access-token/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/architecture/index.html b/architecture/index.html
index 18fbf4e8f89e..86a1b1648f9e 100644
--- a/architecture/index.html
+++ b/architecture/index.html
@@ -1,4023 +1,11 @@
- Architecture - Argo Workflows - The workflow engine for Kubernetes
+ Architecture - Argo Workflows - The workflow engine for Kubernetes

Architecture

-

Diagram

-

The following diagram shows the components of the Argo Workflows architecture. There are two Deployments: Workflow Controller and Argo Server. The former does all of the reconciling, and the latter serves the API. Note that the Controller can be used standalone.

-

The reconciliation code for the WorkflowController can be found in workflow/controller/controller.go. The Argo Server opens up an HTTP(S) listener at server/apiserver/argoserver.go.

-

diagram

-
-

Argo Workflow Overview

-

The diagram below provides a little more detail as far as namespaces. The Workflow Controller and Argo Server both run in the argo namespace. Assuming Argo Workflows was installed as a Cluster Install or as a Managed Namespace Install (described here), the Workflows and the Pods generated from them run in a separate namespace.

-

The internals of a Pod are also shown. Each Step and each DAG Task causes a Pod to be generated, and each of these is composed of three containers:

-
  • main container runs the Image that the user indicated, where the argoexec utility is volume mounted and serves as the main command which calls the configured Command as a sub-process
  • init container is an InitContainer, fetching artifacts and parameters and making them available to the main container
  • wait container performs tasks that are needed for clean up, including saving off parameters and artifacts

Look in cmd/argoexec for this code.

-

diagram

-
-

Workflow controller architecture

-

The following diagram shows the process for reconciliation, whereby a set of worker goroutines process the Workflows which have been added to a Workflow queue based on adds and updates to Workflows and Workflow Pods. Note that in addition to the Informers shown, there are Informers for the other CRDs that Argo Workflows uses as well. You can find this code in workflow/controller/controller.go. Note that the controller only ever processes a single Workflow at a time.

-

diagram

-
-

Various configurations for Argo UI and Argo Server

-

The top diagram below shows what happens if you run "make start UI=true" locally (recommended if you need the UI during local development). This runs a React application (Webpack HTTP server) locally which serves the index.html and typescript files from port 8080. From the typescript code there are calls made to the back end API (Argo Server) at port 2746. The Webpack HTTP server is configured for hot reload, meaning the UI will update automatically based on local code changes.

-

The second diagram is an alternative approach for rare occasions that the React files are broken and you're doing local development. In this case, everything is served from the Argo Server at port 2746.

-

The third diagram shows how things are configured for a Kubernetes environment. It is similar to the second diagram in that the Argo Server hosts everything for the UI.

-

diagram


This page has moved to https://argo-workflows.readthedocs.io/en/latest/architecture/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/argo-server-auth-mode/index.html b/argo-server-auth-mode/index.html
index b99d77aee97b..dab5021db1da 100644
--- a/argo-server-auth-mode/index.html
+++ b/argo-server-auth-mode/index.html
@@ -1,3922 +1,11 @@
- Argo Server Auth Mode - Argo Workflows - The workflow engine for Kubernetes
+ Argo Server Auth Mode - Argo Workflows - The workflow engine for Kubernetes

Argo Server Auth Mode

-

You can choose which kube config the Argo Server uses:

-
  • server - in hosted mode, use the kube config of the service account; in local mode, use your local kube config.
  • client - requires clients to provide their Kubernetes bearer token and uses that.
  • sso - since v2.9, use single sign-on; this will use the same service account as "server" for RBAC. We expect to change this in the future so that the OAuth claims are mapped to service accounts.

The server used to start with an auth mode of "server" by default, but since v3.0 it defaults to "client".

-

To change the server auth mode, specify multiple --auth-mode flags:

-
argo server --auth-mode=sso --auth-mode=...
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/argo-server-auth-mode/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/argo-server-sso-argocd/index.html b/argo-server-sso-argocd/index.html
index 7350cb8f731a..9154664ef5d4 100644
--- a/argo-server-sso-argocd/index.html
+++ b/argo-server-sso-argocd/index.html
@@ -1,4107 +1,11 @@
- Use Argo CD Dex for authentication - Argo Workflows - The workflow engine for Kubernetes
+ Use Argo CD Dex for authentication - Argo Workflows - The workflow engine for Kubernetes

Use Argo CD Dex for authentication

-

It is possible to have the Argo Workflows Server use the Argo CD Dex instance for authentication, for instance if you use Okta with SAML which cannot integrate with Argo Workflows directly. In order to make this happen, you will need the following:

-
  • You must be using at least Dex v2.35.0, because that's when staticClients[].secretEnv was added. That means Argo CD 1.7.12 and above.
  • A secret containing two keys, client-id and client-secret, to be used by both Dex and the Argo Workflows Server. client-id is argo-workflows-sso in this example; client-secret can be any random string. If Argo CD and Argo Workflows are installed in different namespaces, the secret must be present in both of them (see the kubectl commands after this list). Example:
-
apiVersion: v1
-kind: Secret
-metadata:
-  name: argo-workflows-sso
-data:
-  # client-id is 'argo-workflows-sso'
-  client-id: YXJnby13b3JrZmxvd3Mtc3Nv
-  # client-secret is 'MY-SECRET-STRING-CAN-BE-UUID'
-  client-secret: TVktU0VDUkVULVNUUklORy1DQU4tQkUtVVVJRA==
-
-
  • --auth-mode=sso server argument added
  • A Dex staticClients configured for argo-workflows-sso
  • The sso configuration filled out in Argo Workflows Server to match
-
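A minimal sketch of creating that secret with kubectl (assuming Argo CD runs in the argocd namespace and Argo Workflows in the argo namespace; substitute your own client secret):

# the same secret must exist in every namespace that needs it
kubectl create secret generic argo-workflows-sso -n argocd \
  --from-literal=client-id=argo-workflows-sso \
  --from-literal=client-secret=MY-SECRET-STRING-CAN-BE-UUID
kubectl create secret generic argo-workflows-sso -n argo \
  --from-literal=client-id=argo-workflows-sso \
  --from-literal=client-secret=MY-SECRET-STRING-CAN-BE-UUID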

Example manifests for authenticating against Argo CD's Dex (Kustomize)

-

In Argo CD, add an environment variable to Dex deployment and configuration:

-
---
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: argocd-dex-server
-spec:
-  template:
-    spec:
-      containers:
-        - name: dex
-          env:
-            - name: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
-              valueFrom:
-                secretKeyRef:
-                  name: argo-workflows-sso
-                  key: client-secret
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: argocd-cm
-data:
-  # Kustomize sees the value of dex.config as a single string instead of yaml. It will not merge
-  # Dex settings, but instead it will replace the entire configuration with the settings below,
-  # so add these to the existing config instead of setting them in a separate file
-  dex.config: |
-    # Setting staticClients allows Argo Workflows to use Argo CD's Dex installation for authentication
-    staticClients:
-      - id: argo-workflows-sso
-        name: Argo Workflow
-        redirectURIs:
-          - https://argo-workflows.mydomain.com/oauth2/callback
-        secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
-
-

Note that the id field of staticClients must match the client-id.

-

In Argo Workflows, add the --auth-mode=sso argument to the argo-server deployment.

-
---
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: argo-server
-spec:
-  template:
-    spec:
-      containers:
-        - name: argo-server
-          args:
-            - server
-            - --auth-mode=sso
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  # SSO Configuration for the Argo server.
-  # You must also start argo server with `--auth-mode sso`.
-  # https://argoproj.github.io/argo-workflows/argo-server-auth-mode/
-  sso: |
-    # This is the root URL of the OIDC provider (required).
-    issuer: https://argo-cd.mydomain.com/api/dex
-    # This is name of the secret and the key in it that contain OIDC client
-    # ID issued to the application by the provider (required).
-    clientId:
-      name: argo-workflows-sso
-      key: client-id
-    # This is name of the secret and the key in it that contain OIDC client
-    # secret issued to the application by the provider (required).
-    clientSecret:
-      name: argo-workflows-sso
-      key: client-secret
-    # This is the redirect URL supplied to the provider (required). It must
-    # be in the form <argo-server-root-url>/oauth2/callback. It must be
-    # browser-accessible.
-    redirectUrl: https://argo-workflows.mydomain.com/oauth2/callback
-
-

Example Helm chart configuration for authenticating against Argo CD's Dex

-

argo-cd/values.yaml:

-
     dex:
-       image:
-         tag: v2.35.0
-       env:
-         - name: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
-           valueFrom:
-             secretKeyRef:
-               name: argo-workflows-sso
-               key: client-secret
-     server:
-       config:
-         dex.config: |
-           staticClients:
-           - id: argo-workflows-sso
-             name: Argo Workflow
-             redirectURIs:
-               - https://argo-workflows.mydomain.com/oauth2/callback
-             secretEnv: ARGO_WORKFLOWS_SSO_CLIENT_SECRET
-
-

argo-workflows/values.yaml:

-
     server:
-       extraArgs:
-         - --auth-mode=sso
-       sso:
-         issuer: https://argo-cd.mydomain.com/api/dex
-         # sessionExpiry defines how long your login is valid for in hours. (optional, default: 10h)
-         sessionExpiry: 240h
-         clientId:
-           name: argo-workflows-sso
-           key: client-id
-         clientSecret:
-           name: argo-workflows-sso
-           key: client-secret
-         redirectUrl: https://argo-workflows.mydomain.com/oauth2/callback
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/argo-server-sso-argocd/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/argo-server-sso/index.html b/argo-server-sso/index.html
index 94c8b2a55665..16543b171fea 100644
--- a/argo-server-sso/index.html
+++ b/argo-server-sso/index.html
@@ -1,4256 +1,11 @@
- Argo Server SSO - Argo Workflows - The workflow engine for Kubernetes
+ Argo Server SSO - Argo Workflows - The workflow engine for Kubernetes

Argo Server SSO

-
-

v2.9 and after

-
-

It is possible to use Dex for authentication. This document describes how to set up Argo Workflows and Argo CD so that Argo Workflows uses Argo CD's Dex server for authentication.

-

To start Argo Server with SSO

-

Firstly, configure the settings workflow-controller-configmap.yaml with the correct OAuth 2 values. If working towards an OIDC configuration the Argo CD project has guides on its similar (though different) process for setting up OIDC providers. It also includes examples for specific providers. The main difference is that the Argo CD docs mention that their callback address endpoint is /auth/callback. For Argo Workflows, the default format is /oauth2/callback as shown in this comment in the default values.yaml file in the helm chart.

-

Next, create the Kubernetes secrets for holding the OAuth2 client-id and client-secret. You may refer to the kubernetes documentation on Managing secrets. For example by using kubectl with literals:

-
kubectl create secret -n argo generic client-id-secret \
-  --from-literal=client-id-key=foo
-
-kubectl create secret -n argo generic client-secret-secret \
-  --from-literal=client-secret-key=bar
-
-

Then, start the Argo Server using the SSO auth mode:

-
argo server --auth-mode sso --auth-mode ...
-
-

Token Revocation

-
-

v2.12 and after

-
-

As of v2.12 we issue a JWE token for users rather than give them the ID token from your OAuth2 provider. This token is opaque and has a longer expiry time (10h by default).

-

The token encryption key is automatically generated by the Argo Server and stored in a Kubernetes secret named sso.

-

You can revoke all tokens by deleting the encryption key and restarting the Argo Server (so it generates a new key).

-
kubectl delete secret sso
-
-
-

Warning

-

The old key will be in the memory of any running Argo Server, which will therefore accept any user with a token encrypted using the old key. Every Argo Server MUST be restarted.

-
-

All users will need to log in again. Sorry.

-

SSO RBAC

-
-

v2.12 and after

-
-

You can optionally add RBAC to SSO. This allows you to give different users different access levels. Except for client auth mode, all users of the Argo Server must ultimately use a service account. So we allow you to define rules that map a user (maybe using their OIDC groups) to a service account in the same namespace as the Argo Server by annotating the service account.

-

To allow service accounts to manage resources in other namespaces, create a role and role binding in the target namespace.

-
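A minimal sketch of such a role and role binding (hypothetical names; it binds the admin-user service account from the argo namespace to a role in an other-team namespace):

kubectl create role workflow-manager -n other-team \
  --verb=list,create,update,delete --resource=workflows.argoproj.io
kubectl create rolebinding workflow-manager -n other-team \
  --role=workflow-manager --serviceaccount=argo:admin-user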

RBAC config is installation-level, so any changes will need to be made by the team that installed Argo. Many complex rules will be burdensome on that team.

-

Firstly, enable the rbac: setting in workflow-controller-configmap.yaml. You likely want to configure RBAC using groups, so add scopes: to the SSO settings:

-
sso:
-  # ...
-  scopes:
-   - groups
-  rbac:
-    enabled: true
-
-
-

Note

-

Not all OIDC providers support the groups scope. Please speak to your provider about their options.

-
-

To configure a service account to be used, annotate it:

-
apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: admin-user
-  annotations:
-    # The rule is an expression used to determine if this service account
-    # should be used.
-    # * `groups` - an array of the OIDC groups
-    # * `iss` - the issuer ("argo-server")
-    # * `sub` - the subject (typically the username)
-    # Must evaluate to a boolean.
-    # If you want an account to be the default to use, this rule can be "true".
-    # Details of the expression language are available in
-    # https://github.com/antonmedv/expr/blob/master/docs/Language-Definition.md.
-    workflows.argoproj.io/rbac-rule: "'admin' in groups"
-    # The precedence is used to determine which service account to use when multiple rules match.
-    # Precedence is an integer. It may be negative. If omitted, it defaults to "0".
-    # Numerically higher values have higher precedence (not lower, which may be
-    # counter-intuitive to you).
-    # If two rules match and have the same precedence, then which one used will
-    # be arbitrary.
-    workflows.argoproj.io/rbac-rule-precedence: "1"
-
-

If no rule matches, we deny the user access.

-

Tip: You'll probably want to configure a default account to use if no other rule matches, e.g. a read-only account. You can do this as follows:

-
metadata:
-  name: read-only
-  annotations:
-    workflows.argoproj.io/rbac-rule: "true"
-    workflows.argoproj.io/rbac-rule-precedence: "0"
-
-

The precedence must be the lowest of all your service accounts.

-

As of Kubernetes v1.24, secrets for a service account token are no longer automatically created. Therefore, service account secrets for SSO RBAC must be created manually. See Manually create secrets for detailed instructions.

-
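A minimal sketch of creating such a secret manually (assuming the admin-user service account shown above, in the argo namespace):

kubectl apply -n argo -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: admin-user.service-account-token
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF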

SSO RBAC Namespace Delegation

-
-

v3.3 and after

-
-

You can optionally configure RBAC SSO per namespace. Typically, an organization has a Kubernetes cluster and a central team (the owner of the cluster) manages the cluster. Along with this, there are multiple namespaces which are owned by individual teams. This feature helps namespace owners define RBAC for their own namespace.

-

The feature is currently in beta. To enable the feature, set the environment variable SSO_DELEGATE_RBAC_TO_NAMESPACE=true in your argo-server deployment.

- -

Configure a default account in the installation namespace that allows access to all users of your organization. This service account allows a user to login to the cluster. You could optionally add a workflow read-only role and role-binding.

-
apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: user-default-login
-  annotations:
-    workflows.argoproj.io/rbac-rule: "true"
-    workflows.argoproj.io/rbac-rule-precedence: "0"
-
-
-

Note

-

All users MUST map to a cluster service account (such as the one above) before a namespace service account can apply.

-
-

Now, for the namespace that you own, configure a service account that allows members of your team to perform operations in your namespace. Make sure that the precedence of the namespace service account is higher than the precedence of the login service account. Create an appropriate role for this service account and bind it with a role-binding.

-
apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: my-namespace-read-write-user
-  namespace: my-namespace
-  annotations:
-    workflows.argoproj.io/rbac-rule: "'my-team' in groups"
-    workflows.argoproj.io/rbac-rule-precedence: "1"
-
-

With this configuration, when a user is logged in via SSO, makes a request in my-namespace, and the rbac-rule matches, this service account allows the user to perform that operation. If no service account matches in the namespace, the first service account (user-default-login) and its associated role will be used to perform the operation.

-

SSO Login Time

-
-

v2.12 and after

-
-

By default, your SSO session will expire after 10 hours. You can change this by adding a sessionExpiry to your workflow-controller-configmap.yaml under the SSO heading.

-
sso:
-  # Expiry defines how long your login is valid for in hours. (optional)
-  sessionExpiry: 240h
-
-

Custom claims

-
-

v3.1.4 and after

-
-

If your OIDC provider provides groups information with a claim name other than groups, you could configure the config-map to specify a custom claim name for groups. Argo now supports arbitrary custom claims, and any claim can be used for expr evaluation. However, since group information is displayed in the UI, it still needs to be an array of strings with group names as elements.

-

The custom claim in this case will be mapped to the groups key, and we can use the same key groups for evaluating our expressions:

-
sso:
-  # Specify custom claim name for OIDC groups.
-  customGroupClaimName: argo_groups
-
-

If your OIDC provider provides groups information only using the user-info endpoint (e.g. Okta), you could configure userInfoPath to specify the user info endpoint that contains the groups claim.

-
sso:
-  userInfoPath: /oauth2/v1/userinfo
-
-

Example Expression

-
# assuming customClaimGroupName: argo_groups
-workflows.argoproj.io/rbac-rule: "'argo_admins' in groups"
-
-

Filtering groups

-
-

v3.5 and above

-
-

You can configure filterGroupsRegex to filter the groups returned by the OIDC provider. Some use-cases for this include:

-
  • You have multiple applications using the same OIDC provider, and you only want to use groups that are relevant to Argo Workflows.
  • You have many groups and exceed the 4KB cookie size limit (cookies are used to store authentication tokens). If this occurs, login will fail.
-
sso:
-    # Specify a list of regular expressions to filter the groups returned by the OIDC provider.
-    # A logical "OR" is used between each regex in the list
-    filterGroupsRegex:
-    - ".*argo-wf.*"
-    - ".*argo-workflow.*"
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/argo-server-sso/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/argo-server/index.html b/argo-server/index.html
index f1170a731568..51190e1fbdb6 100644
--- a/argo-server/index.html
+++ b/argo-server/index.html
@@ -1,4331 +1,11 @@
- Argo Server - Argo Workflows - The workflow engine for Kubernetes
+ Argo Server - Argo Workflows - The workflow engine for Kubernetes

Argo Server

-
-

v2.5 and after

-
-
-

HTTP vs HTTPS

-

Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP.

-
-

The Argo Server is a server that exposes an API and UI for workflows. You'll need to use this if you want to offload large workflows or the workflow archive.

-

You can run this in either "hosted" or "local" mode.

-

It replaces the Argo UI.

-

Hosted Mode

-

Use this mode if:

-
  • You want a drop-in replacement for the Argo UI.
  • You need to prevent users from directly accessing the database.

Hosted mode is provided as part of the standard manifests, specifically in argo-server-deployment.yaml .

-

Local Mode

-

Use this mode if:

-
  • You want something that does not require complex set-up.
  • You do not need to run a database.

To run locally:

-
argo server
-
-

This will start a server on port 2746 which you can view.

-

Options

-

Auth Mode

-

See auth.

-

Managed Namespace

-

See managed namespace.

-

Base HREF

-

If the server is running behind a reverse proxy with a sub-path different from / (for example, /argo), you can set an alternative sub-path with the --basehref flag or the BASE_HREF environment variable.

-
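For example, a minimal sketch of both options (assuming the sub-path /argo/, including the trailing slash):

# via the command-line flag
argo server --basehref /argo/
# or via the environment variable
BASE_HREF=/argo/ argo server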

You should probably now read how to set up an ingress.

-

Transport Layer Security

-

See TLS.

-

SSO

-

See SSO. See here about sharing Argo CD's Dex with Argo Workflows.

-

Access the Argo Workflows UI

-

By default, the Argo UI service is not exposed with an external IP. To access the UI, use one of the following:

-

kubectl port-forward

-
kubectl -n argo port-forward svc/argo-server 2746:2746
-
-

Then visit: https://localhost:2746

-

Expose a LoadBalancer

-

Update the service to be of type LoadBalancer.

-
kubectl patch svc argo-server -n argo -p '{"spec": {"type": "LoadBalancer"}}'
-
-

Then wait for the external IP to be made available:

-
kubectl get svc argo-server -n argo
-
-
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
-argo-server   LoadBalancer   10.43.43.130   172.18.0.2    2746:30008/TCP   18h
-
-

Ingress

-

You can get ingress working as follows:

-

Add BASE_HREF as environment variable to deployment/argo-server. Do not forget to add a trailing '/' character.

-
---
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: argo-server
-spec:
-  selector:
-    matchLabels:
-      app: argo-server
-  template:
-    metadata:
-      labels:
-        app: argo-server
-    spec:
-      containers:
-      - args:
-        - server
-        env:
-          - name: BASE_HREF
-            value: /argo/
-        image: argoproj/argocli:latest
-        name: argo-server
-...
-
-

Create an ingress, with the annotation ingress.kubernetes.io/rewrite-target: /:

-
-

If TLS is enabled (default in v3.0 and after), the ingress controller must be told that the backend uses HTTPS. The method depends on the ingress controller, e.g. Traefik expects an ingress.kubernetes.io/protocol annotation, while ingress-nginx uses nginx.ingress.kubernetes.io/backend-protocol.

-
-
apiVersion: networking.k8s.io/v1beta1
-kind: Ingress
-metadata:
-  name: argo-server
-  annotations:
-    ingress.kubernetes.io/rewrite-target: /$2
-    ingress.kubernetes.io/protocol: https # Traefik
-    nginx.ingress.kubernetes.io/backend-protocol: https # ingress-nginx
-spec:
-  rules:
-    - http:
-        paths:
-          - backend:
-              serviceName: argo-server
-              servicePort: 2746
-            path: /argo(/|$)(.*)
-
-

Learn more

-

Security

-

Users should consider the following in their set-up of the Argo Server:

-

API Authentication Rate Limiting

-

Argo Server does not perform authentication directly. It delegates this to either the Kubernetes API Server (when --auth-mode=client) or the OAuth provider (when --auth-mode=sso). In each case, it is recommended that the delegate implements any authentication rate limiting you need.

-

IP Address Logging

-

Argo Server does not log the IP addresses of API requests. We recommend you put the Argo Server behind a load balancer, and that load balancer is configured to log the IP addresses of requests that return authentication or authorization errors.

-

Rate Limiting

-
-

v3.4 and after

-
-

By default, the Argo Server rate limits requests to 1000 per IP per minute; you can configure this through --api-rate-limit. You can access additional information through the following headers (see the example after this list).

-
  • X-Rate-Limit-Limit - the rate limit ceiling that is applicable for the current request.
  • X-Rate-Limit-Remaining - the number of requests left for the current rate-limit window.
  • X-Rate-Limit-Reset - the time at which the rate limit resets, specified in UTC time.
  • Retry-After - indicates when a client should retry requests (when the rate limit expires), in UTC time.
- - - - -
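For example, a minimal sketch of inspecting these headers with curl (assuming the access token and local server address used elsewhere in these docs):

curl -s -o /dev/null -D - \
  -H "Authorization: $ARGO_TOKEN" \
  https://localhost:2746/api/v1/workflows/argo | grep -i -E 'x-rate-limit|retry-after'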


This page has moved to https://argo-workflows.readthedocs.io/en/latest/argo-server/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/artifact-repository-ref/index.html b/artifact-repository-ref/index.html
index 4031d911e2a6..d684842c31aa 100644
--- a/artifact-repository-ref/index.html
+++ b/artifact-repository-ref/index.html
@@ -1,3950 +1,11 @@
- Artifact Repository Ref - Argo Workflows - The workflow engine for Kubernetes
+ Artifact Repository Ref - Argo Workflows - The workflow engine for Kubernetes

Artifact Repository Ref

-
-

v2.9 and after

-
-

You can reduce duplication in your templates by configuring repositories that can be accessed by any workflow. This can also remove sensitive information from your templates.

-

Create a suitable config map in either (a) your workflows namespace or (b) in the managed namespace:

-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  # If you want to use this config map by default, name it "artifact-repositories". Otherwise, you can provide a reference to a
-  # different config map in `artifactRepositoryRef.configMap`.
-  name: my-artifact-repository
-  annotations:
-    # v3.0 and after - if you want to use a specific key, put that key into this annotation.
-    workflows.argoproj.io/default-artifact-repository: default-v1-s3-artifact-repository
-data:
-  default-v1-s3-artifact-repository: |
-    s3:
-      bucket: my-bucket
-      endpoint: minio:9000
-      insecure: true
-      accessKeySecret:
-        name: my-minio-cred
-        key: accesskey
-      secretKeySecret:
-        name: my-minio-cred
-        key: secretkey
-  v2-s3-artifact-repository: |
-    s3:
-      ...
-
-

You can override the artifact repository for a workflow as follows:

-
spec:
-  artifactRepositoryRef:
-    configMap: my-artifact-repository # default is "artifact-repositories"
-    key: v2-s3-artifact-repository # default can be set by the `workflows.argoproj.io/default-artifact-repository` annotation in config map.
-
-
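A minimal sketch of a complete Workflow using that reference (illustrative names; it assumes the my-artifact-repository config map above exists in the same namespace):

kubectl create -n argo -f - <<EOF
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-repository-ref-
spec:
  entrypoint: main
  artifactRepositoryRef:
    configMap: my-artifact-repository
    key: v2-s3-artifact-repository
  templates:
    - name: main
      container:
        image: busybox
        command: [sh, -c]
        args: ["echo hello > /tmp/hello.txt"]
      outputs:
        artifacts:
          - name: hello
            path: /tmp/hello.txt
EOF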

This feature gives maximum benefit when used with key-only artifacts.

-

Reference.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/artifact-repository-ref/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/artifact-visualization/index.html b/artifact-visualization/index.html
index ac6b7c807762..d4ca1d61ffdd 100644
--- a/artifact-visualization/index.html
+++ b/artifact-visualization/index.html
@@ -1,4131 +1,11 @@
- Artifact Visualization - Argo Workflows - The workflow engine for Kubernetes
+ Artifact Visualization - Argo Workflows - The workflow engine for Kubernetes

Artifact Visualization

-
-

since v3.4

-
-

Artifacts can be viewed in the UI.

-

Use cases:

-
  • Comparing ML pipeline runs from generated charts.
  • Visualizing end results of ML pipeline runs.
  • Debugging workflows where visual artifacts are the most helpful.

Demo

-
  • Artifacts appear as elements in the workflow DAG that you can click on.
  • When you click on the artifact, a panel appears.
  • The first time this appears, explanatory text is shown to help you understand if you might need to change your workflows to use this feature.
  • Known file types such as images, text or HTML are shown in an inline-frame (iframe).
  • Artifacts are sandboxed using a Content-Security-Policy that prevents JavaScript execution.
  • JSON is shown with syntax highlighting.

To start, take a look at the example.

-

Screenshots: Graph Report, Test Report

-

Artifact Types

-

An artifact may be a .tgz, a file, or a directory.

-

.tgz

-

Viewing .tgz files is not supported in the UI. By default, artifacts are compressed as a .tgz; only artifacts that were not compressed can be viewed.

-

To prevent compression, set archive to none:

-
- name: artifact
-  # ...
-  archive:
-    none: { }
-
-

File

-

Files may be shown in the UI. To determine if a file can be shown, the UI checks if the artifact's file extension is supported. The extension is found in the artifact's key.

-

To view a file, add the extension to the key:

-
- name: single-file
-  s3:
-    key: visualization.png
-
-

Directory

-

Directories are shown in the UI. The UI considers any key with a trailing-slash to be a directory.

-

To view a directory, add a trailing-slash:

-
- name: reports
-  s3:
-    key: reports/
-
-

If the directory contains index.html, then that will be shown, otherwise a directory listing is displayed.

-

⚠️ HTML files may contain CSS and images served from the same origin. Scripts are not allowed. Nothing may be remotely -loaded.

-

Security

-

Content Security Policy

-

We assume that artifacts are not trusted, so by default, artifacts are served with a Content-Security-Policy that -disables JavaScript and remote files.

-

This is similar to what happens when you include third-party scripts, such as analytic tracking, in your website. -However, those tracking codes are normally served from a different domain to your main website. Artifacts are served -from the same origin, so normal browser controls are not secure enough.

-

Sub-Path Access

-

Previously, users could access the artifacts of any workflows they could access. To allow HTML files to link to other files -within their tree, you can now access any sub-paths of the artifact's key.

-

Example:

-

The artifact produces a folder in an S3 bucket named my-bucket, with a key report/. You can also access anything -matching report/*.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/artifact-visualization/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/async-pattern/index.html b/async-pattern/index.html
index b3e8d19bb644..bbeb09518dd8 100644
--- a/async-pattern/index.html
+++ b/async-pattern/index.html
@@ -1,4070 +1,11 @@
- Asynchronous Job Pattern - Argo Workflows - The workflow engine for Kubernetes
+ Asynchronous Job Pattern - Argo Workflows - The workflow engine for Kubernetes

Asynchronous Job Pattern

-

Introduction

-

If triggering an external job (e.g. an Amazon EMR job) from Argo that does not run to completion in a container, there are two options:

-
  • create a container that polls the external job completion status
  • combine a trigger step that starts the job with a suspend step that is resumed by an API call to Argo when the external job is complete.

This document describes the second option in more detail.

-

The pattern

-

The pattern involves two steps - the first step is a short-running step that triggers a long-running job outside Argo (e.g. an HTTP submission), and the second step is a suspend step that suspends workflow execution and is ultimately either resumed or stopped (i.e. failed) via a call to the Argo API when the job outside Argo succeeds or fails.

-

When implemented as a WorkflowTemplate it can look something like this:

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: external-job-template
-spec:
-  entrypoint: run-external-job
-  arguments:
-    parameters:
-      - name: "job-cmd"
-  templates:
-    - name: run-external-job
-      inputs:
-        parameters:
-          - name: "job-cmd"
-            value: "{{workflow.parameters.job-cmd}}"
-      steps:
-        - - name: trigger-job
-            template: trigger-job
-            arguments:
-              parameters:
-                - name: "job-cmd"
-                  value: "{{inputs.parameters.job-cmd}}"
-        - - name: wait-completion
-            template: wait-completion
-            arguments:
-              parameters:
-                - name: uuid
-                  value: "{{steps.trigger-job.outputs.result}}"
-
-    - name: trigger-job
-      inputs:
-        parameters:
-          - name: "job-cmd"
-      container:
-        image: appropriate/curl:latest
-        command: [ "/bin/sh", "-c" ]
-        args: [ "{{inputs.parameters.job-cmd}}" ]
-
-    - name: wait-completion
-      inputs:
-        parameters:
-          - name: uuid
-      suspend: { }
-
-

In this case the job-cmd parameter can be a command that makes an HTTP call via curl to an endpoint that returns a job UUID. More sophisticated submission and parsing of submission output could be done with something like a Python script step.

-
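For example, a minimal sketch of submitting the template above (the endpoint is hypothetical; it is assumed to start the job and return its UUID):

argo submit --from workflowtemplate/external-job-template \
  -p job-cmd='curl -s -X POST https://jobs.example.com/api/submit'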

On job completion the external job would need to call either resume if successful:

-

You may need an access token.

-
curl --request PUT \
-  --url https://localhost:2746/api/v1/workflows/<NAMESPACE>/<WORKFLOWNAME>/resume
-  --header 'content-type: application/json' \
-  --header "Authorization: $ARGO_TOKEN" \
-  --data '{
-      "namespace": "<NAMESPACE>",
-      "name": "<WORKFLOWNAME>",
-      "nodeFieldSelector": "inputs.parameters.uuid.value=<UUID>"
-    }'
-
-

or stop if unsuccessful:

-
curl --request PUT \
-  --url https://localhost:2746/api/v1/workflows/<NAMESPACE>/<WORKFLOWNAME>/stop
-  --header 'content-type: application/json' \
-  --header "Authorization: $ARGO_TOKEN" \
-  --data '{
-      "namespace": "<NAMESPACE>",
-      "name": "<WORKFLOWNAME>",
-      "nodeFieldSelector": "inputs.parameters.uuid.value=<UUID>",
-      "message": "<FAILURE-MESSAGE>"
-    }'
-
-

Retrying failed jobs

-

Using argo retry on failed jobs that follow this pattern will cause Argo to re-attempt the suspend step without re-triggering the job.

-

Instead you need to use the --restart-successful option, e.g. if using the template from above:

-
argo retry <WORKFLOWNAME> --restart-successful --node-field-selector templateRef.template=run-external-job,phase=Failed
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/async-pattern/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo/index.html b/cli/argo/index.html
index a5d58ad09ec5..0f6f62750349 100644
--- a/cli/argo/index.html
+++ b/cli/argo/index.html
@@ -1,4172 +1,11 @@
- argo - Argo Workflows - The workflow engine for Kubernetes
+ argo - Argo Workflows - The workflow engine for Kubernetes

argo

- -

argo

-

argo is the command line interface to Argo

-

Synopsis

-

You can use the CLI in the following modes:

-

Kubernetes API Mode (default)

-

Requests are sent directly to the Kubernetes API. No Argo Server is needed. Large workflows and the workflow archive are not supported.

-

Use when you have direct access to the Kubernetes API, and don't need large workflow or workflow archive support.

-

If you're using instance ID (which is very unlikely), you'll need to set it:

-
ARGO_INSTANCEID=your-instanceid
-
- -

Argo Server GRPC Mode

-

Requests are sent to the Argo Server API via GRPC (using HTTP/2). Large workflows and the workflow archive are supported. Network load-balancers that do not support HTTP/2 are not supported.

-

Use this if you do not have access to the Kubernetes API (e.g. you're in another cluster), and you're running the Argo Server using a network load-balancer that supports HTTP/2.

-

To enable, set ARGO_SERVER:

-
ARGO_SERVER=localhost:2746 ;# The format is "host:port" - do not prefix with "http" or "https"
-
- -

If you have transport-layer security (TLS) enabled (i.e. you are running "argo server --secure" and therefore have HTTPS):

-
ARGO_SECURE=true
-
- -

If your server is running with self-signed certificates (do not use this in production):

-
ARGO_INSECURE_SKIP_VERIFY=true
-
- -

By default, the CLI uses your KUBECONFIG to determine defaults for ARGO_TOKEN and ARGO_NAMESPACE. You will probably get an error saying "no configuration has been provided". To prevent this:

-
KUBECONFIG=/dev/null
-
- -

You will then need to set:

-
ARGO_NAMESPACE=argo
-
- -

And:

-
ARGO_TOKEN='Bearer ******' ;# Should always start with "Bearer " or "Basic ".
-
- -

Argo Server HTTP1 Mode

-

As per GRPC mode, but uses HTTP. Can be used with an ALB that does not support HTTP/2. The command "argo logs --since-time=2020...." will not work (due to time-type).

-

Use this when your network load-balancer does not support HTTP/2.

-

Use the same configuration as GRPC mode, but also set:

-
ARGO_HTTP1=true
-
- -

If your server is behind an ingress with a path (you'll be running "argo server --basehref /..." or "BASE_HREF=/... argo server"):

-
ARGO_BASE_HREF=/argo
-
- -
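Putting it together, a minimal sketch of an HTTP1-mode configuration (all values are examples):

export ARGO_SERVER=localhost:2746
export ARGO_HTTP1=true
export ARGO_SECURE=true
export KUBECONFIG=/dev/null
export ARGO_NAMESPACE=argo
export ARGO_TOKEN='Bearer ******'
argo list
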
argo [flags]
-
-

Options

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-  -h, --help                           help for argo
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_archive/index.html b/cli/argo_archive/index.html
index 0e7904f062db..3a9a03893f78 100644
--- a/cli/argo_archive/index.html
+++ b/cli/argo_archive/index.html
@@ -1,4056 +1,11 @@
- argo archive - Argo Workflows - The workflow engine for Kubernetes
+ argo archive - Argo Workflows - The workflow engine for Kubernetes

argo archive

- -

argo archive

-

manage the workflow archive

-
argo archive [flags]
-
-

Options

-
  -h, --help   help for archive
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_delete/index.html b/cli/argo_archive_delete/index.html index 43829da63c79..269690cdfe96 100644 --- a/cli/argo_archive_delete/index.html +++ b/cli/argo_archive_delete/index.html @@ -1,4049 +1,11 @@ - - - + - - - - - - - - - - - - argo archive delete - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive delete - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive delete

delete a workflow in the archive

-
argo archive delete UID... [flags]
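
For example (a minimal usage sketch; "uid" is a placeholder for the UID of an archived workflow, as listed by argo archive list):

# Delete a single archived workflow by UID:

  argo archive delete uid

# Delete several archived workflows in one call:

  argo archive delete uid another-uid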
-
-

Options

-
  -h, --help   help for delete
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_delete/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_get/index.html b/cli/argo_archive_get/index.html index 01128abb036d..85953bd5955b 100644 --- a/cli/argo_archive_get/index.html +++ b/cli/argo_archive_get/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo archive get - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive get - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive get

get a workflow in the archive

-
argo archive get UID [flags]
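
For example (a minimal usage sketch; "uid" is a placeholder for the UID of an archived workflow):

# Print an archived workflow in YAML:

  argo archive get uid -o yaml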
-
-

Options

-
  -h, --help            help for get
-  -o, --output string   Output format. One of: json|yaml|wide (default "wide")
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_get/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_list-label-keys/index.html b/cli/argo_archive_list-label-keys/index.html index 2842dbd71d34..5af9329e9729 100644 --- a/cli/argo_archive_list-label-keys/index.html +++ b/cli/argo_archive_list-label-keys/index.html @@ -1,4049 +1,11 @@ - - - + - - - - - - - - - - - - argo archive list-label-keys - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive list-label-keys - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive list-label-keys

list workflow label keys in the archive

-
argo archive list-label-keys [flags]
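
For example (the command takes no arguments):

# List the label keys present in the workflow archive:

  argo archive list-label-keys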
-
-

Options

-
  -h, --help   help for list-label-keys
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_list-label-keys/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_list-label-values/index.html b/cli/argo_archive_list-label-values/index.html index 914506bb3332..0eb79fcd9890 100644 --- a/cli/argo_archive_list-label-values/index.html +++ b/cli/argo_archive_list-label-values/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo archive list-label-values - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive list-label-values - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive list-label-values

get workflow label values in the archive

-
argo archive list-label-values [flags]
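
For example (a sketch; "key1" is a placeholder label key, following the -l example shown under Options):

# List the archived values recorded for one label key:

  argo archive list-label-values -l key1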
-
-

Options

-
  -h, --help              help for list-label-values
-  -l, --selector string   Selector (label query) to query on, allows 1 value (e.g. -l key1)
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_list-label-values/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_list/index.html b/cli/argo_archive_list/index.html index 64ce7557d2a0..4e7e5c329945 100644 --- a/cli/argo_archive_list/index.html +++ b/cli/argo_archive_list/index.html @@ -1,4052 +1,11 @@ - - - + - - - - - - - - - - - - argo archive list - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive list - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive list

list workflows in the archive

-
argo archive list [flags]
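
For example (a sketch using only the flags documented below; the label selector values are placeholders):

# List archived workflows in the wide table format:

  argo archive list -o wide

# List archived workflows matching a label, fetched in chunks of 100:

  argo archive list -l key1=value1 --chunk-size 100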
-
-

Options

-
      --chunk-size int    Return large lists in chunks rather than all at once. Pass 0 to disable.
-  -h, --help              help for list
-  -o, --output string     Output format. One of: json|yaml|wide (default "wide")
-  -l, --selector string   Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_list/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_resubmit/index.html b/cli/argo_archive_resubmit/index.html index 1d4a82b44a90..7b5147f0e3e7 100644 --- a/cli/argo_archive_resubmit/index.html +++ b/cli/argo_archive_resubmit/index.html @@ -1,4101 +1,11 @@ - - - + - - - - - - - - - - - - argo archive resubmit - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive resubmit - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive resubmit

resubmit one or more workflows

-
argo archive resubmit [WORKFLOW...] [flags]
-
-

Examples

-
# Resubmit a workflow:
-
-  argo archive resubmit uid
-
-# Resubmit multiple workflows:
-
-  argo archive resubmit uid another-uid
-
-# Resubmit multiple workflows by label selector:
-
-  argo archive resubmit -l workflows.argoproj.io/test=true
-
-# Resubmit multiple workflows by field selector:
-
-  argo archive resubmit --field-selector metadata.namespace=argo
-
-# Resubmit and wait for completion:
-
-  argo archive resubmit --wait uid
-
-# Resubmit and watch until completion:
-
-  argo archive resubmit --watch uid
-
-# Resubmit and tail logs until completion:
-
-  argo archive resubmit --log uid
-
-

Options

-
      --field-selector string   Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                    help for resubmit
-      --log                     log the workflow until it completes
-      --memoized                re-use successful steps & outputs from the previous run
-  -o, --output string           Output format. One of: name|json|yaml|wide
-  -p, --parameter stringArray   input parameter to override on the original workflow spec
-      --priority int32          workflow priority
-  -l, --selector string         Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-  -w, --wait                    wait for the workflow to complete, only works when a single workflow is resubmitted
-      --watch                   watch the workflow until it completes, only works when a single workflow is resubmitted
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_resubmit/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_archive_retry/index.html b/cli/argo_archive_retry/index.html index 4b6ef73f8cae..2500f8c2a9f8 100644 --- a/cli/argo_archive_retry/index.html +++ b/cli/argo_archive_retry/index.html @@ -1,4101 +1,11 @@ - - - + - - - - - - - - - - - - argo archive retry - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo archive retry - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo archive retry

retry zero or more workflows

-
argo archive retry [WORKFLOW...] [flags]
-
-

Examples

-
# Retry a workflow:
-
-  argo archive retry uid
-
-# Retry multiple workflows:
-
-  argo archive retry uid another-uid
-
-# Retry multiple workflows by label selector:
-
-  argo archive retry -l workflows.argoproj.io/test=true
-
-# Retry multiple workflows by field selector:
-
-  argo archive retry --field-selector metadata.namespace=argo
-
-# Retry and wait for completion:
-
-  argo archive retry --wait uid
-
-# Retry and watch until completion:
-
-  argo archive retry --watch uid
-
-# Retry and tail logs until completion:
-
-  argo archive retry --log uid
-
-

Options

-
      --field-selector string        Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                         help for retry
-      --log                          log the workflow until it completes
-      --node-field-selector string   Selector of nodes to reset, e.g. --node-field-selector inputs.parameters.myparam.value=abc
-  -o, --output string                Output format. One of: name|json|yaml|wide
-  -p, --parameter stringArray        input parameter to override on the original workflow spec
-      --restart-successful           indicates to restart successful nodes matching the --node-field-selector
-  -l, --selector string              Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-  -w, --wait                         wait for the workflow to complete, only works when a single workflow is retried
-      --watch                        watch the workflow until it completes, only works when a single workflow is retried
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_archive_retry/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_auth/index.html b/cli/argo_auth/index.html index 8958ebeb4a2f..335efc45cc07 100644 --- a/cli/argo_auth/index.html +++ b/cli/argo_auth/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo auth - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo auth - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo auth

manage authentication settings

-
argo auth [flags]
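
For example (the token subcommand is documented on its own page):

# Print the auth token:

  argo auth token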
-
-

Options

-
  -h, --help   help for auth
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_auth/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_auth_token/index.html b/cli/argo_auth_token/index.html index bb01cf0fbfab..63dbc0d8acc6 100644 --- a/cli/argo_auth_token/index.html +++ b/cli/argo_auth_token/index.html @@ -1,4049 +1,11 @@ - - - + - - - - - - - - - - - - argo auth token - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo auth token - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo auth token

Print the auth token

-
argo auth token [flags]
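
For example (the shell variable below is purely illustrative):

# Print the auth token:

  argo auth token

# Capture the token for reuse in scripts (illustrative only):

  ARGO_TOKEN="$(argo auth token)"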
-
-

Options

-
  -h, --help   help for token
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

SEE ALSO

  • argo auth - manage authentication settings

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_auth_token/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cluster-template/index.html b/cli/argo_cluster-template/index.html index 16fd0efcdbd2..a903754efd8a 100644 --- a/cli/argo_cluster-template/index.html +++ b/cli/argo_cluster-template/index.html @@ -1,4054 +1,11 @@ - - - + - - - - - - - - - - - - argo cluster-template - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cluster-template - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo cluster-template

manipulate cluster workflow templates

-
argo cluster-template [flags]
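
For example (the create and delete subcommands are documented on their own pages; FILE1 and WORKFLOW_TEMPLATE are placeholders):

# Create a cluster workflow template from a file:

  argo cluster-template create FILE1

# Delete a cluster workflow template by name:

  argo cluster-template delete WORKFLOW_TEMPLATE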
-
-

Options

-
  -h, --help   help for cluster-template
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cluster-template/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cluster-template_create/index.html b/cli/argo_cluster-template_create/index.html index f54da7cfc88b..06e1586a02e4 100644 --- a/cli/argo_cluster-template_create/index.html +++ b/cli/argo_cluster-template_create/index.html @@ -1,4075 +1,11 @@ - - - + - - - - - - - - - - - - argo cluster-template create - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cluster-template create - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
argo cluster-template create

create a cluster workflow template

-
argo cluster-template create FILE1 FILE2... [flags]
-
-

Examples

-
# Create a Cluster Workflow Template:
-  argo cluster-template create FILE1
-
-# Create a Cluster Workflow Template and print it as YAML:
-  argo cluster-template create FILE1 --output yaml
-
-# Create a Cluster Workflow Template with relaxed validation:
-  argo cluster-template create FILE1 --strict false
-
-

Options

-
  -h, --help            help for create
-  -o, --output string   Output format. One of: name|json|yaml|wide
-      --strict          perform strict workflow validation (default true)
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cluster-template_create/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cluster-template_delete/index.html b/cli/argo_cluster-template_delete/index.html index e54956fc602a..a7034752ac62 100644 --- a/cli/argo_cluster-template_delete/index.html +++ b/cli/argo_cluster-template_delete/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo cluster-template delete - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cluster-template delete - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cluster-template delete

-

delete a cluster workflow template

-
argo cluster-template delete WORKFLOW_TEMPLATE [flags]
-
-

Options

-
      --all    Delete all cluster workflow templates
-  -h, --help   help for delete
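A brief, hedged usage sketch (the template name is a placeholder):

  # Delete a single cluster workflow template by name
  argo cluster-template delete my-cluster-template

  # Delete every cluster workflow template
  argo cluster-template delete --all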
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cluster-template_delete/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cluster-template_get/index.html b/cli/argo_cluster-template_get/index.html index 9fa7467d8627..13f3052474dc 100644 --- a/cli/argo_cluster-template_get/index.html +++ b/cli/argo_cluster-template_get/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo cluster-template get - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cluster-template get - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cluster-template get

-

display details about a cluster workflow template

-
argo cluster-template get CLUSTER WORKFLOW_TEMPLATE... [flags]
-
-

Options

-
  -h, --help            help for get
-  -o, --output string   Output format. One of: json|yaml|wide
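For example (the template name is a placeholder):

  # Print a summary of one cluster workflow template
  argo cluster-template get my-cluster-template

  # Print the full manifest as YAML
  argo cluster-template get my-cluster-template -o yaml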
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cluster-template_get/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cluster-template_lint/index.html b/cli/argo_cluster-template_lint/index.html index 91e3b9a1d15e..c26c8221d880 100644 --- a/cli/argo_cluster-template_lint/index.html +++ b/cli/argo_cluster-template_lint/index.html @@ -1,4051 +1,11 @@ - - - + - - - - - - - - - - - - argo cluster-template lint - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cluster-template lint - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cluster-template lint

-

validate files or directories of cluster workflow template manifests

-
argo cluster-template lint FILE... [flags]
-
-

Options

-
  -h, --help            help for lint
-  -o, --output string   Linting results output format. One of: pretty|simple (default "pretty")
-      --strict          perform strict workflow validation (default true)
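For example (the file and directory names are placeholders):

  # Lint a single manifest with the default pretty output
  argo cluster-template lint my-cluster-template.yaml

  # Lint a directory of manifests with compact output
  argo cluster-template lint templates/ -o simple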
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cluster-template_lint/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cluster-template_list/index.html b/cli/argo_cluster-template_list/index.html index 09883c8c113a..87b220eede29 100644 --- a/cli/argo_cluster-template_list/index.html +++ b/cli/argo_cluster-template_list/index.html @@ -1,4074 +1,11 @@ - - - + - - - - - - - - - - - - argo cluster-template list - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cluster-template list - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cluster-template list

-

list cluster workflow templates

-
argo cluster-template list [flags]
-
-

Examples

-
# List Cluster Workflow Templates:
-  argo cluster-template list
-
-# List Cluster Workflow Templates with additional details such as labels, annotations, and status:
-  argo cluster-template list --output wide
-
-# List Cluster Workflow Templates by name only:
-  argo cluster-template list -o name
-
-

Options

-
  -h, --help            help for list
-  -o, --output string   Output format. One of: wide|name
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cluster-template_list/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_completion/index.html b/cli/argo_completion/index.html index 8d7de59b0b61..b8db3cc9c7b0 100644 --- a/cli/argo_completion/index.html +++ b/cli/argo_completion/index.html @@ -1,4071 +1,11 @@ - - - + - - - - - - - - - - - - argo completion - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo completion - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo completion

-

output shell completion code for the specified shell (bash or zsh)

-

Synopsis

-

Write bash or zsh shell completion code to standard output.

-

For bash, ensure you have bash completions installed and enabled.
To access completions in your current shell, run
$ source <(argo completion bash)
Alternatively, write it to a file and source in .bash_profile

-

For zsh, output to a file in a directory referenced by the $fpath shell variable.

-
argo completion SHELL [flags]
-
-

Options

-
  -h, --help   help for completion
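A hedged setup sketch based on the synopsis above (the file locations are placeholders):

  # bash: write the completion script to a file and source it from .bash_profile
  argo completion bash > ~/.argo-completion.bash
  echo 'source ~/.argo-completion.bash' >> ~/.bash_profile

  # zsh: write the script to a directory referenced by $fpath
  argo completion zsh > "${fpath[1]}/_argo"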
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo
  • -

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_completion/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cp/index.html b/cli/argo_cp/index.html index e1e3bd9d4905..4ea3aaace430 100644 --- a/cli/argo_cp/index.html +++ b/cli/argo_cp/index.html @@ -1,4076 +1,11 @@ - - - + - - - - - - - - - - - - argo cp - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cp - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cp

-

copy artifacts from workflow

-
argo cp my-wf output-directory ... [flags]
-
-

Examples

-
# Copy a workflow's artifacts to a local output directory:
-
-  argo cp my-wf output-directory
-
-# Copy artifacts from a specific node in a workflow to a local output directory:
-
-  argo cp my-wf output-directory --node-id=my-wf-node-id-123
-
-

Options

-
      --artifact-name string   name of output artifact in workflow
-  -h, --help                   help for cp
-  -n, --namespace string       namespace of workflow
-      --node-id string         id of node in workflow
-      --path string            use variables {workflowName}, {nodeId}, {templateName}, {artifactName}, and {namespace} to create a customized path to store the artifacts; example: {workflowName}/{templateName}/{artifactName} (default "{namespace}/{workflowName}/{nodeId}/outputs/{artifactName}")
-      --template-name string   name of template in workflow
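Beyond the examples above, --path can reshape where copied artifacts are written; a hedged sketch (the workflow name and output directory are placeholders):

  # Group artifacts by template and artifact name instead of the default layout
  argo cp my-wf ./artifacts --path '{workflowName}/{templateName}/{artifactName}'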
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo
  • -

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cp/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron/index.html b/cli/argo_cron/index.html index ee99158d8097..b55ca3dce350 100644 --- a/cli/argo_cron/index.html +++ b/cli/argo_cron/index.html @@ -1,4072 +1,11 @@ - - - + - - - - - - - - - - - - argo cron - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron

-

manage cron workflows

-

Synopsis

-

NextScheduledRun assumes that the workflow-controller uses UTC as its timezone

-
argo cron [flags]
-
-

Options

-
  -h, --help   help for cron
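The actual work is done by the cron subcommands documented on the pages that follow (create, delete, get, lint, list, resume); for instance:

  # Show the cron workflows in the current namespace
  argo cron list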
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_create/index.html b/cli/argo_cron_create/index.html index f0d56f5ba49f..af7e55768977 100644 --- a/cli/argo_cron_create/index.html +++ b/cli/argo_cron_create/index.html @@ -1,4059 +1,11 @@ - - - + - - - - - - - - - - - - argo cron create - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron create - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron create

-

create a cron workflow

-
argo cron create FILE1 FILE2... [flags]
-
-

Options

-
      --entrypoint string       override entrypoint
-      --generate-name string    override metadata.generateName
-  -h, --help                    help for create
-  -l, --labels string           Comma separated labels to apply to the workflow. Will override previous values.
-      --name string             override metadata.name
-  -o, --output string           Output format. One of: name|json|yaml|wide
-  -p, --parameter stringArray   pass an input parameter
-  -f, --parameter-file string   pass a file containing all input parameters
-      --schedule string         override cron workflow schedule
-      --serviceaccount string   run all pods in the workflow using specified serviceaccount
-      --strict                  perform strict workflow validation (default true)
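For example (the manifest name, schedule, and parameter are placeholders):

  # Create a cron workflow, overriding its schedule and passing an input parameter
  argo cron create my-cron-wf.yaml --schedule '*/5 * * * *' -p message=hello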
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_create/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_delete/index.html b/cli/argo_cron_delete/index.html index 89dbdae0721e..049e11ad69c5 100644 --- a/cli/argo_cron_delete/index.html +++ b/cli/argo_cron_delete/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo cron delete - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron delete - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron delete

-

delete a cron workflow

-
argo cron delete [CRON_WORKFLOW... | --all] [flags]
-
-

Options

-
      --all    Delete all cron workflows
-  -h, --help   help for delete
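For example (the cron workflow name is a placeholder):

  # Delete one cron workflow
  argo cron delete my-cron-wf

  # Delete all cron workflows in the current namespace
  argo cron delete --all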
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_delete/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_get/index.html b/cli/argo_cron_get/index.html index 93d2ba4874da..9896f4c1c152 100644 --- a/cli/argo_cron_get/index.html +++ b/cli/argo_cron_get/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo cron get - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron get - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron get

-

display details about a cron workflow

-
argo cron get CRON_WORKFLOW... [flags]
-
-

Options

-
  -h, --help            help for get
-  -o, --output string   Output format. One of: json|yaml|wide
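For example (the cron workflow name is a placeholder):

  # Show the cron workflow as YAML
  argo cron get my-cron-wf -o yaml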
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_get/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_lint/index.html b/cli/argo_cron_lint/index.html index b576e543d2e3..8f778cf20b52 100644 --- a/cli/argo_cron_lint/index.html +++ b/cli/argo_cron_lint/index.html @@ -1,4051 +1,11 @@ - - - + - - - - - - - - - - - - argo cron lint - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron lint - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron lint

-

validate files or directories of cron workflow manifests

-
argo cron lint FILE... [flags]
-
-

Options

-
  -h, --help            help for lint
-  -o, --output string   Linting results output format. One of: pretty|simple (default "pretty")
-      --strict          perform strict validation (default true)
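For example (the file and directory names are placeholders):

  # Lint one cron workflow manifest
  argo cron lint my-cron-wf.yaml

  # Lint a directory of manifests with compact output
  argo cron lint cron-workflows/ -o simple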
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_lint/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_list/index.html b/cli/argo_cron_list/index.html index 7c9b28746a99..a47b85f3423f 100644 --- a/cli/argo_cron_list/index.html +++ b/cli/argo_cron_list/index.html @@ -1,4052 +1,11 @@ - - - + - - - - - - - - - - - - argo cron list - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron list - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron list

-

list cron workflows

-
argo cron list [flags]
-
-

Options

-
  -A, --all-namespaces    Show workflows from all namespaces
-  -h, --help              help for list
-  -o, --output string     Output format. One of: wide|name
-  -l, --selector string   Selector (label query) to filter on, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2). Matching objects must satisfy all of the specified label constraints.
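For example (the label selector value is a placeholder):

  # List cron workflows across all namespaces
  argo cron list -A

  # List only matching cron workflows, printing names only
  argo cron list -l app=my-app -o name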
-
-

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_list/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_resume/index.html b/cli/argo_cron_resume/index.html index d8d933626d62..9ad51709cc8d 100644 --- a/cli/argo_cron_resume/index.html +++ b/cli/argo_cron_resume/index.html @@ -1,4049 +1,11 @@ - - - + - - - - - - - - - - - - argo cron resume - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron resume - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron resume

- -

argo cron resume

-

resume zero or more cron workflows

-
argo cron resume [CRON_WORKFLOW...] [flags]
-
-
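An illustrative sketch of typical usage (the cron workflow names are hypothetical):

# Resume a single cron workflow:
  argo cron resume my-cron-wf

# Resume several cron workflows at once:
  argo cron resume my-cron-a my-cron-b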

Options

-
  -h, --help   help for resume
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_resume/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_cron_suspend/index.html b/cli/argo_cron_suspend/index.html index 107634bc49cd..47c3eb9ce9cb 100644 --- a/cli/argo_cron_suspend/index.html +++ b/cli/argo_cron_suspend/index.html @@ -1,4049 +1,11 @@ - - - + - - - - - - - - - - - - argo cron suspend - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo cron suspend - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo cron suspend

- -

argo cron suspend

-

suspend zero or more cron workflows

-
argo cron suspend CRON_WORKFLOW... [flags]
-
-
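A minimal illustrative invocation (the name is hypothetical):

# Suspend a cron workflow so that no new runs are scheduled:
  argo cron suspend my-cron-wf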

Options

-
  -h, --help   help for suspend
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_cron_suspend/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_delete/index.html b/cli/argo_delete/index.html index 38cf119a5dca..d4543d8f867d 100644 --- a/cli/argo_delete/index.html +++ b/cli/argo_delete/index.html @@ -1,4084 +1,11 @@ - - - + - - - - - - - - - - - - argo delete - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo delete - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo delete

- -

argo delete

-

delete workflows

-
argo delete [--dry-run] [WORKFLOW...|[--all] [--older] [--completed] [--resubmitted] [--prefix PREFIX] [--selector SELECTOR] [--force] [--status STATUS] ] [flags]
-
-

Examples

-
# Delete a workflow:
-
-  argo delete my-wf
-
-# Delete the latest workflow:
-
-  argo delete @latest
-
-
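A further illustrative combination of the flags listed below (not taken from the original page):

# Preview the deletion of completed workflows that finished more than 7 days ago:
  argo delete --completed --older 7d --dry-run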

Options

-
      --all                     Delete all workflows
-  -A, --all-namespaces          Delete workflows from all namespaces
-      --completed               Delete completed workflows
-      --dry-run                 Do not delete the workflow, only print what would happen
-      --field-selector string   Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-      --force                   Force delete workflows by removing finalizers
-  -h, --help                    help for delete
-      --older string            Delete completed workflows finished before the specified duration (e.g. 10m, 3h, 1d)
-      --prefix string           Delete workflows by prefix
-      --query-chunk-size int    Run the list query in chunks (deletes will still be executed individually)
-      --resubmitted             Delete resubmitted workflows
-  -l, --selector string         Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-      --status strings          Delete by status (comma separated)
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_delete/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_executor-plugin/index.html b/cli/argo_executor-plugin/index.html index 00ecd0e83089..4497b8108138 100644 --- a/cli/argo_executor-plugin/index.html +++ b/cli/argo_executor-plugin/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo executor-plugin - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo executor-plugin - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo executor-plugin

- -

argo executor-plugin

-

manage executor plugins

-
argo executor-plugin [flags]
-
-

Options

-
  -h, --help   help for executor-plugin
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_executor-plugin/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_executor-plugin_build/index.html b/cli/argo_executor-plugin_build/index.html index a5c78dd26764..c2816907aaa9 100644 --- a/cli/argo_executor-plugin_build/index.html +++ b/cli/argo_executor-plugin_build/index.html @@ -1,4049 +1,11 @@ - - - + - - - - - - - - - - - - argo executor-plugin build - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo executor-plugin build - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo executor-plugin build

- -

argo executor-plugin build

-

build an executor plugin

-
argo executor-plugin build DIR [flags]
-
-
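A minimal illustrative invocation (the directory name is hypothetical; DIR is the directory holding the plugin's sources and manifest):

# Build the executor plugin found in ./my-plugin:
  argo executor-plugin build ./my-plugin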

Options

-
  -h, --help   help for build
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_executor-plugin_build/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_get/index.html b/cli/argo_get/index.html index 42f0d6745020..bebe5d2c2039 100644 --- a/cli/argo_get/index.html +++ b/cli/argo_get/index.html @@ -1,4076 +1,11 @@ - - - + - - - - - - - - - - - - argo get - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo get - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo get

- -

argo get

-

display details about a workflow

-
argo get WORKFLOW... [flags]
-
-

Examples

-
# Get information about a workflow:
-
-  argo get my-wf
-
-# Get the latest workflow:
-  argo get @latest
-
-
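One more illustrative invocation built from the flags below (the workflow name is hypothetical):

# Print the full workflow as YAML:
  argo get my-wf -o yaml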

Options

-
  -h, --help                         help for get
-      --no-color                     Disable colorized output
-      --no-utf8                      Use plain 7-bits ascii characters
-      --node-field-selector string   Selector of node to display, e.g. --node-field-selector phase=abc
-  -o, --output string                Output format. One of: json|yaml|short|wide
-      --status string                Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error)
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_get/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_lint/index.html b/cli/argo_lint/index.html index a1fae2858f8f..94ab9c4aa3f4 100644 --- a/cli/argo_lint/index.html +++ b/cli/argo_lint/index.html @@ -1,4076 +1,11 @@ - - - + - - - - - - - - - - - - argo lint - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo lint - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo lint

- -

argo lint

-

validate files or directories of manifests

-
argo lint FILE... [flags]
-
-

Examples

-
# Lint all manifests in a specified directory:
-
-  argo lint ./manifests
-
-# Lint only manifests of Workflows and CronWorkflows from stdin:
-
-  cat manifests.yaml | argo lint --kinds=workflows,cronworkflows -
-
-
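Another illustrative invocation built from the flags below (the file name is hypothetical):

# Lint only WorkflowTemplates from a file:
  argo lint --kinds=workflowtemplates my-templates.yaml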

Options

-
  -h, --help            help for lint
-      --kinds strings   Which kinds will be linted. Can be: workflows|workflowtemplates|cronworkflows|clusterworkflowtemplates (default [all])
-      --offline         perform offline linting. For resources referencing other resources, the references will be resolved from the provided args
-  -o, --output string   Linting results output format. One of: pretty|simple (default "pretty")
-      --strict          Perform strict workflow validation (default true)
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_lint/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_list/index.html b/cli/argo_list/index.html index cae23719382f..dce535aec7e6 100644 --- a/cli/argo_list/index.html +++ b/cli/argo_list/index.html @@ -1,4104 +1,11 @@ - - - + - - - - - - - - - - - - argo list - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo list - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo list

- -

argo list

-

list workflows

-
argo list [flags]
-
-

Examples

-
# List all workflows:
-  argo list
-
-# List all workflows from all namespaces:
-  argo list -A
-
-# List all running workflows:
-  argo list --running
-
-# List all completed workflows:
-  argo list --completed
-
-# List workflows created within the last 10m:
-  argo list --since 10m
-
-# List workflows that finished more than 2h ago:
-  argo list --older 2h
-
-# List workflows with more information (such as parameters):
-  argo list -o wide
-
-# List workflows in YAML format:
-  argo list -o yaml
-
-# List workflows that have both labels:
-  argo list -l label1=value1,label2=value2
-
-

Options

-
  -A, --all-namespaces          Show workflows from all namespaces
-      --chunk-size int          Return large lists in chunks rather than all at once. Pass 0 to disable.
-      --completed               Show completed workflows. Mutually exclusive with --running.
-      --field-selector string   Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                    help for list
-      --no-headers              Don't print headers (default print headers).
-      --older string            List completed workflows finished before the specified duration (e.g. 10m, 3h, 1d)
-  -o, --output string           Output format. One of: name|wide|yaml|json
-      --prefix string           Filter workflows by prefix
-      --resubmitted             Show resubmitted workflows
-      --running                 Show running workflows. Mutually exclusive with --completed.
-  -l, --selector string         Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-      --since string            Show only workflows created within a relative duration
-      --status strings          Filter by status (comma separated)
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_list/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_logs/index.html b/cli/argo_logs/index.html index 991ec46ae2da..099d852fd0ff 100644 --- a/cli/argo_logs/index.html +++ b/cli/argo_logs/index.html @@ -1,4101 +1,11 @@ - - - + - - - - - - - - - - - - argo logs - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo logs - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo logs

- -

argo logs

-

view logs of a pod or workflow

-
argo logs WORKFLOW [POD] [flags]
-
-

Examples

-
# Print the logs of a workflow:
-
-  argo logs my-wf
-
-# Follow the logs of a workflow:
-
-  argo logs my-wf --follow
-
-# Print the logs of a workflow with a selector:
-
-  argo logs my-wf -l app=sth
-
-# Print the logs of a single container in a pod:
-
-  argo logs my-wf my-pod -c my-container
-
-# Print the logs of a workflow's pods:
-
-  argo logs my-wf my-pod
-
-# Print the logs of a pod:
-
-  argo logs --since=1h my-pod
-
-# Print the logs of the latest workflow:
-  argo logs @latest
-
-
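A further illustrative combination of the flags below (not taken from the original page):

# Follow the latest workflow's logs, filter lines, and include timestamps:
  argo logs @latest --follow --grep error --timestamps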

Options

-
  -c, --container string    Print the logs of this container (default "main")
-  -f, --follow              Specify if the logs should be streamed.
-      --grep string         grep for lines
-  -h, --help                help for logs
-      --no-color            Disable colorized output
-  -p, --previous            Specify if the previously terminated container logs should be returned.
-  -l, --selector string     log selector for some pod
-      --since duration      Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used.
-      --since-time string   Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used.
-      --tail int            If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime (default -1)
-      --timestamps          Include timestamps on each line in the log output
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_logs/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_node/index.html b/cli/argo_node/index.html index 94dfb28e7d5c..54952f45d8d2 100644 --- a/cli/argo_node/index.html +++ b/cli/argo_node/index.html @@ -1,4076 +1,11 @@ - - - + - - - - - - - - - - - - argo node - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo node - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo node

- -

argo node

-

perform action on a node in a workflow

-
argo node ACTION WORKFLOW FLAGS [flags]
-
-

Examples

-
# Set outputs to a node within a workflow:
-
-  argo node set my-wf --output-parameter parameter-name="Hello, world!" --node-field-selector displayName=approve
-
-# Set the message of a node within a workflow:
-
-  argo node set my-wf --message "We did it!" --node-field-selector displayName=approve
-
-
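Another illustrative combination of the flags below, mirroring the selector used above:

# Mark the selected node as succeeded:
  argo node set my-wf --phase Succeeded --node-field-selector displayName=approve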

Options

-
  -h, --help                           help for node
-  -m, --message string                 Set the message of a node, eg: --message "Hello, world!"
-      --node-field-selector string     Selector of node to set, e.g. --node-field-selector inputs.parameters.myparam.value=abc
-  -p, --output-parameter stringArray   Set a "supplied" output parameter of node, eg: --output-parameter parameter-name="Hello, world!"
-      --phase string                   Phase to set the node to, eg: --phase Succeeded
-
-

Options inherited from parent commands

-
-      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              Submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_node/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_resubmit/index.html b/cli/argo_resubmit/index.html index 931881a1c7c3..f6103ef9f2a4 100644 --- a/cli/argo_resubmit/index.html +++ b/cli/argo_resubmit/index.html @@ -1,4121 +1,11 @@ - - - + - - - - - - - - - - - - argo resubmit - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo resubmit - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo resubmit

-

resubmit one or more workflows

-

Synopsis

-

Submit a completed workflow again. Optionally override parameters and memoize. Similar to running argo submit again with the same parameters.

-
argo resubmit [WORKFLOW...] [flags]
-
-

Examples

-
# Resubmit a workflow:
-
-  argo resubmit my-wf
-
-# Resubmit multiple workflows:
-
-  argo resubmit my-wf my-other-wf my-third-wf
-
-# Resubmit multiple workflows by label selector:
-
-  argo resubmit -l workflows.argoproj.io/test=true
-
-# Resubmit multiple workflows by field selector:
-
-  argo resubmit --field-selector metadata.namespace=argo
-
-# Resubmit and wait for completion:
-
-  argo resubmit --wait my-wf.yaml
-
-# Resubmit and watch until completion:
-
-  argo resubmit --watch my-wf.yaml
-
-# Resubmit and tail logs until completion:
-
-  argo resubmit --log my-wf.yaml
-
-# Resubmit the latest workflow:
-
-  argo resubmit @latest
-
-

Options

-
      --field-selector string   Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                    help for resubmit
-      --log                     log the workflow until it completes
-      --memoized                re-use successful steps & outputs from the previous run
-  -o, --output string           Output format. One of: name|json|yaml|wide
-  -p, --parameter stringArray   input parameter to override on the original workflow spec
-      --priority int32          workflow priority
-  -l, --selector string         Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-  -w, --wait                    wait for the workflow to complete, only works when a single workflow is resubmitted
-      --watch                   watch the workflow until it completes, only works when a single workflow is resubmitted
-
-
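As an illustrative sketch only (the workflow name my-wf and the parameter name message are placeholders, not values from this page), the flags above can be combined like so:

  # Resubmit my-wf, re-using successful steps and outputs from the previous run
  argo resubmit --memoized my-wf

  # Resubmit my-wf with an overridden input parameter, printing the result as YAML
  argo resubmit -p message="hello again" -o yaml my-wf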

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_resubmit/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_resume/index.html b/cli/argo_resume/index.html
index 192729da9a82..64c20aabb97b 100644
--- a/cli/argo_resume/index.html
+++ b/cli/argo_resume/index.html
@@ -1,4081 +1,11 @@
- argo resume - Argo Workflows - The workflow engine for Kubernetes
+ argo resume - Argo Workflows - The workflow engine for Kubernetes

argo resume

-

resume zero or more workflows (opposite of suspend)

-
argo resume WORKFLOW1 WORKFLOW2... [flags]
-
-

Examples

-
# Resume a workflow that has been suspended:
-
-  argo resume my-wf
-
-# Resume multiple workflows:
-
-  argo resume my-wf my-other-wf my-third-wf     
-
-# Resume the latest workflow:
-
-  argo resume @latest
-
-# Resume multiple workflows by node field selector:
-
-  argo resume --node-field-selector inputs.parameters.myparam.value=abc
-
-

Options

-
  -h, --help                         help for resume
-      --node-field-selector string   selector of node to resume, e.g. --node-field-selector inputs.parameters.myparam.value=abc
-
-
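A minimal sketch of the node field selector above (my-wf and myparam are placeholder names):

  # Resume only the suspended node(s) whose input parameter matches, leaving other nodes suspended
  argo resume my-wf --node-field-selector inputs.parameters.myparam.value=abc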

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_resume/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_retry/index.html b/cli/argo_retry/index.html
index 47f0981a89b1..58edb3767d4a 100644
--- a/cli/argo_retry/index.html
+++ b/cli/argo_retry/index.html
@@ -1,4124 +1,11 @@
- argo retry - Argo Workflows - The workflow engine for Kubernetes
+ argo retry - Argo Workflows - The workflow engine for Kubernetes

argo retry

-

retry zero or more workflows

-

Synopsis

-

Rerun a failed Workflow. Specifically, rerun all failed steps. The same Workflow object is used and no new Workflows are created.

-
argo retry [WORKFLOW...] [flags]
-
-

Examples

-
# Retry a workflow:
-
-  argo retry my-wf
-
-# Retry multiple workflows:
-
-  argo retry my-wf my-other-wf my-third-wf
-
-# Retry multiple workflows by label selector:
-
-  argo retry -l workflows.argoproj.io/test=true
-
-# Retry multiple workflows by field selector:
-
-  argo retry --field-selector metadata.namespace=argo
-
-# Retry and wait for completion:
-
-  argo retry --wait my-wf.yaml
-
-# Retry and watch until completion:
-
-  argo retry --watch my-wf.yaml
-
-# Retry and tail logs until completion:
-
-  argo retry --log my-wf.yaml
-
-# Retry the latest workflow:
-
-  argo retry @latest
-
-# Restart node with id 5 on successful workflow, using node-field-selector
-  argo retry my-wf --restart-successful --node-field-selector id=5
-
-

Options

-
      --field-selector string        Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                         help for retry
-      --log                          log the workflow until it completes
-      --node-field-selector string   selector of nodes to reset, e.g. --node-field-selector inputs.parameters.myparam.value=abc
-  -o, --output string                Output format. One of: name|json|yaml|wide
-  -p, --parameter stringArray        input parameter to override on the original workflow spec
-      --restart-successful           indicates to restart successful nodes matching the --node-field-selector
-  -l, --selector string              Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-  -w, --wait                         wait for the workflow to complete, only works when a single workflow is retried
-      --watch                        watch the workflow until it completes, only works when a single workflow is retried
-
-
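Combining the flags above, an illustrative sketch (my-wf is a placeholder; the node id is taken from the example earlier on this page):

  # Retry my-wf, additionally restarting the successful node with id 5
  argo retry my-wf --restart-successful --node-field-selector id=5

  # Retry the latest workflow and tail its logs until it completes
  argo retry --log @latest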

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_retry/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_server/index.html b/cli/argo_server/index.html
index 3c55e451ac7f..823269b967ed 100644
--- a/cli/argo_server/index.html
+++ b/cli/argo_server/index.html
@@ -1,4085 +1,11 @@
- argo server - Argo Workflows - The workflow engine for Kubernetes
+ argo server - Argo Workflows - The workflow engine for Kubernetes

argo server

-

start the Argo Server

-
argo server [flags]
-
-

Examples

-
See https://argoproj.github.io/argo-workflows/argo-server/
-
-

Options

-
      --access-control-allow-origin string   Set Access-Control-Allow-Origin header in HTTP responses.
-      --allowed-link-protocol stringArray    Allowed link protocol in configMap. Used if the allowed configMap links protocol are different from http,https. Defaults to the environment variable ALLOWED_LINK_PROTOCOL (default [http,https])
-      --api-rate-limit uint                  Set limit per IP for api ratelimiter (default 1000)
-      --auth-mode stringArray                API server authentication mode. Any combination of one or more of: client, server, sso (default [client])
-      --basehref string                      Value for base href in index.html. Used if the server is running behind reverse proxy under subpath different from /. Defaults to the environment variable BASE_HREF. (default "/")
-  -b, --browser                              enable automatic launching of the browser [local mode]
-      --configmap string                     Name of K8s configmap to retrieve workflow controller configuration (default "workflow-controller-configmap")
-      --event-async-dispatch                 dispatch event async
-      --event-operation-queue-size int       how many event operations can be queued at once (default 16)
-      --event-worker-count int               how many event workers to run (default 4)
-  -h, --help                                 help for server
-      --hsts                                 Whether or not we should add an HTTP Strict Transport Security header. This only has effect if secure is enabled. (default true)
-      --kube-api-burst int                   Burst to use while talking with kube-apiserver. (default 30)
-      --kube-api-qps float32                 QPS to use while talking with kube-apiserver. (default 20)
-      --log-format string                    The formatter to use for logs. One of: text|json (default "text")
-      --managed-namespace string             namespace that the server watches; defaults to the installation namespace
-      --namespaced                           run as namespaced mode
-  -p, --port int                             Port to listen on (default 2746)
-  -e, --secure                               Whether or not we should listen on TLS. (default true)
-      --tls-certificate-secret-name string   The name of a Kubernetes secret that contains the server certificates
-      --x-frame-options string               Set X-Frame-Options header in HTTP responses. (default "DENY")
-
-
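An illustrative sketch of common combinations of the options above (the port number and the choice to disable TLS for local use are assumptions, not recommendations from this page):

  # Run the server namespaced, on a non-default port, accepting both SSO and client auth
  argo server --namespaced --port 2747 --auth-mode sso --auth-mode client

  # Local development: listen without TLS and open the UI in a browser
  argo server --secure=false --browser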

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_server/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_stop/index.html b/cli/argo_stop/index.html
index e66999d4d621..b785dd012be7 100644
--- a/cli/argo_stop/index.html
+++ b/cli/argo_stop/index.html
@@ -1,4101 +1,11 @@
- argo stop - Argo Workflows - The workflow engine for Kubernetes
+ argo stop - Argo Workflows - The workflow engine for Kubernetes

argo stop

-

stop zero or more workflows allowing all exit handlers to run

-

Synopsis

-

Stop a workflow but still run exit handlers.

-
argo stop WORKFLOW WORKFLOW2... [flags]
-
-

Examples

-
# Stop a workflow:
-
-  argo stop my-wf
-
-# Stop the latest workflow:
-
-  argo stop @latest
-
-# Stop multiple workflows by label selector
-
-  argo stop -l workflows.argoproj.io/test=true
-
-# Stop multiple workflows by field selector
-
-  argo stop --field-selector metadata.namespace=argo
-
-

Options

-
      --dry-run                      If true, only print the workflows that would be stopped, without stopping them.
-      --field-selector string        Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                         help for stop
-      --message string               Message to add to previously running nodes
-      --node-field-selector string   selector of node to stop, e.g. --node-field-selector inputs.parameters.myparam.value=abc
-  -l, --selector string              Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-
-
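A short sketch using the flags above (my-wf and the message text are placeholders):

  # Preview which workflows a label selector would stop, without stopping them
  argo stop --dry-run -l workflows.argoproj.io/test=true

  # Stop a workflow, recording a reason on its previously running nodes
  argo stop my-wf --message "superseded by a newer run"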

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_stop/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_submit/index.html b/cli/argo_submit/index.html
index 8a9ea11e1d7e..68d0e1d87d47 100644
--- a/cli/argo_submit/index.html
+++ b/cli/argo_submit/index.html
@@ -1,4103 +1,11 @@
- argo submit - Argo Workflows - The workflow engine for Kubernetes
+ argo submit - Argo Workflows - The workflow engine for Kubernetes

argo submit

-

submit a workflow

-
argo submit [FILE... | --from kind/name] [flags]
-
-

Examples

-
# Submit multiple workflows from files:
-
-  argo submit my-wf.yaml
-
-# Submit and wait for completion:
-
-  argo submit --wait my-wf.yaml
-
-# Submit and watch until completion:
-
-  argo submit --watch my-wf.yaml
-
-# Submit and tail logs until completion:
-
-  argo submit --log my-wf.yaml
-
-# Submit a single workflow from an existing resource
-
-  argo submit --from cronwf/my-cron-wf
-
-

Options

-
      --dry-run                      modify the workflow on the client-side without creating it
-      --entrypoint string            override entrypoint
-      --from kind/name               Submit from an existing kind/name E.g., --from=cronwf/hello-world-cwf
-      --generate-name string         override metadata.generateName
-  -h, --help                         help for submit
-  -l, --labels string                Comma separated labels to apply to the workflow. Will override previous values.
-      --log                          log the workflow until it completes
-      --name string                  override metadata.name
-      --node-field-selector string   selector of node to display, eg: --node-field-selector phase=abc
-  -o, --output string                Output format. One of: name|json|yaml|wide
-  -p, --parameter stringArray        pass an input parameter
-  -f, --parameter-file string        pass a file containing all input parameters
-      --priority int32               workflow priority
-      --scheduled-time string        Override the workflow's scheduledTime parameter (useful for backfilling). The time must be RFC3339
-      --server-dry-run               send request to server with dry-run flag which will modify the workflow without creating it
-      --serviceaccount string        run all pods in the workflow using specified serviceaccount
-      --status string                Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error). Should only be used with --watch.
-      --strict                       perform strict workflow validation (default true)
-  -w, --wait                         wait for the workflow to complete
-      --watch                        watch the workflow until it completes
-
-
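An illustrative sketch (my-wf.yaml, the parameter name message, and the service account workflow-sa are placeholder names):

  # Submit with a parameter override and a specific service account, waiting for completion
  argo submit my-wf.yaml -p message="hello" --serviceaccount workflow-sa --wait

  # Validate the manifest on the server without creating the workflow
  argo submit --server-dry-run -o yaml my-wf.yaml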

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_submit/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_suspend/index.html b/cli/argo_suspend/index.html
index 41bb6710159f..76fd8e421309 100644
--- a/cli/argo_suspend/index.html
+++ b/cli/argo_suspend/index.html
@@ -1,4071 +1,11 @@
- argo suspend - Argo Workflows - The workflow engine for Kubernetes
+ argo suspend - Argo Workflows - The workflow engine for Kubernetes

argo suspend

-

suspend zero or more workflows (opposite of resume)

-
argo suspend WORKFLOW1 WORKFLOW2... [flags]
-
-

Examples

-
# Suspend a workflow:
-
-  argo suspend my-wf
-
-# Suspend the latest workflow:
-  argo suspend @latest
-
-

Options

-
  -h, --help   help for suspend
-
-
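Suspend is normally paired with argo resume (documented above); a minimal sketch:

  # Pause the most recent workflow, then resume it later
  argo suspend @latest
  argo resume @latest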

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_suspend/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_template/index.html b/cli/argo_template/index.html
index 3874c1252dd1..e69c523041d6 100644
--- a/cli/argo_template/index.html
+++ b/cli/argo_template/index.html
@@ -1,4054 +1,11 @@
- argo template - Argo Workflows - The workflow engine for Kubernetes
+ argo template - Argo Workflows - The workflow engine for Kubernetes

argo template

-

manipulate workflow templates

-
argo template [flags]
-
-

Options

-
  -h, --help   help for template
-
-
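The sub-commands documented on the following pages can be chained through a typical template lifecycle; a sketch (my-template.yaml and my-template are placeholder names):

  # Register a template, inspect it, then remove it (see the create/get/delete pages below)
  argo template create my-template.yaml
  argo template get my-template -o yaml
  argo template delete my-template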

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_template/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_template_create/index.html b/cli/argo_template_create/index.html
index e1ed72e9bdb3..3047ad53e688 100644
--- a/cli/argo_template_create/index.html
+++ b/cli/argo_template_create/index.html
@@ -1,4051 +1,11 @@
- argo template create - Argo Workflows - The workflow engine for Kubernetes
+ argo template create - Argo Workflows - The workflow engine for Kubernetes

argo template create

-

create a workflow template

-
argo template create FILE1 FILE2... [flags]
-
-

Options

-
  -h, --help            help for create
-  -o, --output string   Output format. One of: name|json|yaml|wide
-      --strict          perform strict workflow validation (default true)
-
-
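A sketch of the options above (my-template.yaml is a placeholder file name):

  # Create a workflow template and print the created resource as YAML
  argo template create my-template.yaml -o yaml

  # Create without strict validation, e.g. for manifests using fields this client version does not know
  argo template create --strict=false my-template.yaml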

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_template_create/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_template_delete/index.html b/cli/argo_template_delete/index.html
index 17d1bfcb9333..5d869b5cc2c2 100644
--- a/cli/argo_template_delete/index.html
+++ b/cli/argo_template_delete/index.html
@@ -1,4050 +1,11 @@
- argo template delete - Argo Workflows - The workflow engine for Kubernetes
+ argo template delete - Argo Workflows - The workflow engine for Kubernetes

argo template delete

-

delete a workflow template

-
argo template delete WORKFLOW_TEMPLATE [flags]
-
-

Options

-
      --all    Delete all workflow templates
-  -h, --help   help for delete
-
-
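A sketch of the options above (my-template is a placeholder name):

  # Delete a single workflow template by name
  argo template delete my-template

  # Delete all workflow templates (combine with -n to target a specific namespace)
  argo template delete --all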

Options inherited from parent commands

-
      --argo-base-href string          A path to use with the HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Defaults to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enable verbose logging, i.e. --loglevel debug
-
-

SEE ALSO


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_template_delete/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/cli/argo_template_get/index.html b/cli/argo_template_get/index.html
index e08a9d2d8057..5f95a9be8b99 100644
--- a/cli/argo_template_get/index.html
+++ b/cli/argo_template_get/index.html
@@ -1,4050 +1,11 @@
- argo template get - Argo Workflows - The workflow engine for Kubernetes
+ argo template get - Argo Workflows - The workflow engine for Kubernetes

argo template get

- -

argo template get

-

display details about a workflow template

-
argo template get WORKFLOW_TEMPLATE... [flags]
-
-
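For illustration, here are a couple of typical invocations; my-template is a hypothetical WorkflowTemplate name, and -o yaml is one of the output formats listed under Options below.

# Get details of a workflow template in the current namespace:

  argo template get my-template

# Get the full template as YAML:

  argo template get my-template -o yaml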

Options

-
  -h, --help            help for get
-  -o, --output string   Output format. One of: json|yaml|wide
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_template_get/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_template_lint/index.html b/cli/argo_template_lint/index.html index 0d6dd21f62dc..691233886c5b 100644 --- a/cli/argo_template_lint/index.html +++ b/cli/argo_template_lint/index.html @@ -1,4051 +1,11 @@ - - - + - - - - - - - - - - - - argo template lint - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo template lint - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo template lint

- -

argo template lint

-

validate a file or directory of workflow template manifests

-
argo template lint (DIRECTORY | FILE1 FILE2 FILE3...) [flags]
-
-
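For illustration, here are a couple of typical invocations; the file and directory names are hypothetical, and --output simple is one of the formats listed under Options below.

# Lint a single workflow template manifest:

  argo template lint my-template.yaml

# Lint a directory of manifests with simple output:

  argo template lint templates/ --output simple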

Options

-
  -h, --help            help for lint
-  -o, --output string   Linting results output format. One of: pretty|simple (default "pretty")
-      --strict          perform strict workflow validation (default true)
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_template_lint/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_template_list/index.html b/cli/argo_template_list/index.html index f64b0eb8848d..9e29a5183d51 100644 --- a/cli/argo_template_list/index.html +++ b/cli/argo_template_list/index.html @@ -1,4051 +1,11 @@ - - - + - - - - - - - - - - - - argo template list - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo template list - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo template list

- -

argo template list

-

list workflow templates

-
argo template list [flags]
-
-
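For illustration, here are a couple of typical invocations using the flags listed under Options below.

# List workflow templates in the current namespace:

  argo template list

# List workflow templates in all namespaces, printing only names:

  argo template list -A -o name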

Options

-
  -A, --all-namespaces   Show workflows from all namespaces
-  -h, --help             help for list
-  -o, --output string    Output format. One of: wide|name
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_template_list/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_terminate/index.html b/cli/argo_terminate/index.html index a959f7e51c98..9dec7e6c0a86 100644 --- a/cli/argo_terminate/index.html +++ b/cli/argo_terminate/index.html @@ -1,4099 +1,11 @@ - - - + - - - - - - - - - - - - argo terminate - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo terminate - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo terminate

- -

argo terminate

-

terminate zero or more workflows immediately

-

Synopsis

-

Immediately stop a workflow and do not run any exit handlers.

-
argo terminate WORKFLOW WORKFLOW2... [flags]
-
-

Examples

-
# Terminate a workflow:
-
-  argo terminate my-wf
-
-# Terminate the latest workflow:
-
-  argo terminate @latest
-
-# Terminate multiple workflows by label selector
-
-  argo terminate -l workflows.argoproj.io/test=true
-
-# Terminate multiple workflows by field selector
-
-  argo terminate --field-selector metadata.namespace=argo
-
-

Options

-
      --dry-run                 Do not terminate the workflow, only print what would happen
-      --field-selector string   Selector (field query) to filter on, supports '=', '==', and '!='.(e.g. --field-selector key1=value1,key2=value2). The server only supports a limited number of field queries per type.
-  -h, --help                    help for terminate
-  -l, --selector string         Selector (label query) to filter on, not including uninitialized ones, supports '=', '==', and '!='.(e.g. -l key1=value1,key2=value2)
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_terminate/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_version/index.html b/cli/argo_version/index.html index 59825e0337d1..7942d062d165 100644 --- a/cli/argo_version/index.html +++ b/cli/argo_version/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - argo version - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo version - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo version

- -

argo version

-

print version information

-
argo version [flags]
-
-
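For illustration, here are two typical invocations; --short is described under Options below.

# Print full version information:

  argo version

# Print just the version number:

  argo version --short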

Options

-
  -h, --help    help for version
-      --short   print just the version number
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_version/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_wait/index.html b/cli/argo_wait/index.html index 69ce3a643040..74329db7a56b 100644 --- a/cli/argo_wait/index.html +++ b/cli/argo_wait/index.html @@ -1,4073 +1,11 @@ - - - + - - - - - - - - - - - - argo wait - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo wait - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo wait

- -

argo wait

-

waits for workflows to complete

-
argo wait [WORKFLOW...] [flags]
-
-

Examples

-
# Wait on a workflow:
-
-  argo wait my-wf
-
-# Wait on the latest workflow:
-
-  argo wait @latest
-
-

Options

-
  -h, --help               help for wait
-      --ignore-not-found   Ignore the wait if the workflow is not found
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_wait/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cli/argo_watch/index.html b/cli/argo_watch/index.html index ac40eb3a8126..a07d21a39a26 100644 --- a/cli/argo_watch/index.html +++ b/cli/argo_watch/index.html @@ -1,4074 +1,11 @@ - - - + - - - - - - - - - - - - argo watch - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + argo watch - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

argo watch

- -

argo watch

-

watch a workflow until it completes

-
argo watch WORKFLOW [flags]
-
-

Examples

-
# Watch a workflow:
-
-  argo watch my-wf
-
-# Watch the latest workflow:
-
-  argo watch @latest
-
-

Options

-
  -h, --help                         help for watch
-      --node-field-selector string   selector of node to display, eg: --node-field-selector phase=abc
-      --status string                Filter by status (Pending, Running, Succeeded, Skipped, Failed, Error)
-
-

Options inherited from parent commands

-
      --argo-base-href string          An path to use with HTTP client (e.g. due to BASE_HREF). Defaults to the ARGO_BASE_HREF environment variable.
-      --argo-http1                     If true, use the HTTP client. Defaults to the ARGO_HTTP1 environment variable.
-  -s, --argo-server host:port          API server host:port. e.g. localhost:2746. Defaults to the ARGO_SERVER environment variable.
-      --as string                      Username to impersonate for the operation
-      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
-      --as-uid string                  UID to impersonate for the operation
-      --certificate-authority string   Path to a cert file for the certificate authority
-      --client-certificate string      Path to a client certificate file for TLS
-      --client-key string              Path to a client key file for TLS
-      --cluster string                 The name of the kubeconfig cluster to use
-      --context string                 The name of the kubeconfig context to use
-      --gloglevel int                  Set the glog logging level
-  -H, --header strings                 Sets additional header to all requests made by Argo CLI. (Can be repeated multiple times to add multiple headers, also supports comma separated headers) Used only when either ARGO_HTTP1 or --argo-http1 is set to true.
-      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
-  -k, --insecure-skip-verify           If true, the Argo Server's certificate will not be checked for validity. This will make your HTTPS connections insecure. Defaults to the ARGO_INSECURE_SKIP_VERIFY environment variable.
-      --instanceid string              submit with a specific controller's instance id label. Default to the ARGO_INSTANCEID environment variable.
-      --kubeconfig string              Path to a kube config. Only required if out-of-cluster
-      --loglevel string                Set the logging level. One of: debug|info|warn|error (default "info")
-  -n, --namespace string               If present, the namespace scope for this CLI request
-      --password string                Password for basic authentication to the API server
-      --proxy-url string               If provided, this URL will be used to connect via proxy
-      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
-  -e, --secure                         Whether or not the server is using TLS with the Argo Server. Defaults to the ARGO_SECURE environment variable. (default true)
-      --server string                  The address and port of the Kubernetes API server
-      --tls-server-name string         If provided, this name will be used to validate server certificate. If this is not provided, hostname used to contact the server is used.
-      --token string                   Bearer token for authentication to the API server
-      --user string                    The name of the kubeconfig user to use
-      --username string                Username for basic authentication to the API server
-  -v, --verbose                        Enabled verbose logging, i.e. --loglevel debug
-
-

SEE ALSO

-
    -
  • argo - argo is the command line interface to Argo

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cli/argo_watch/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/client-libraries/index.html b/client-libraries/index.html index 1106c22465ad..a2ff5c296306 100644 --- a/client-libraries/index.html +++ b/client-libraries/index.html @@ -1,4028 +1,11 @@ - - - + - - - - - - - - - - - - Client Libraries - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Client Libraries - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Client Libraries

-

This page contains an overview of the client libraries for using the Argo API from various programming languages.

-

To write applications using the REST API, you do not need to implement the API calls and request/response types -yourself. You can use a client library for the programming language you are using.

-

Client libraries often handle common tasks such as authentication for you.

-

Auto-generated client libraries

-

The following client libraries are auto-generated using OpenAPI Generator. -Please expect very minimal support from the Argo team.

- - - - - - - - - - - - - - - - - - - - - - - - - -
LanguageClient LibraryExamples/Docs
Golangapiclient.goExample
JavaJava
PythonPython
-

Community-maintained client libraries

-

The following client libraries are provided and maintained by their authors, not the Argo team.

- - - - - - - - - - - - - - - - - - - - -
LanguageClient LibraryExamples/Docs
PythonCoulerMulti-workflow engine support Python SDK
PythonHeraEasy and accessible Argo workflows construction and submission in Python

This page has moved to https://argo-workflows.readthedocs.io/en/latest/client-libraries/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cluster-workflow-templates/index.html b/cluster-workflow-templates/index.html index f7d95e0dee4e..dcabfa54ecd0 100644 --- a/cluster-workflow-templates/index.html +++ b/cluster-workflow-templates/index.html @@ -1,4195 +1,11 @@ - - - + - - - - - - - - - - - - Cluster Workflow Templates - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Cluster Workflow Templates - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Cluster Workflow Templates

-
-

v2.8 and after

-
-

Introduction

-

ClusterWorkflowTemplates are cluster-scoped WorkflowTemplates. Like a ClusterRole, a ClusterWorkflowTemplate is created at cluster scope and can be accessed across all namespaces in the cluster.

-

WorkflowTemplates documentation link

-

Defining ClusterWorkflowTemplate

-
apiVersion: argoproj.io/v1alpha1
-kind: ClusterWorkflowTemplate
-metadata:
-  name: cluster-workflow-template-whalesay-template
-spec:
-  templates:
-  - name: whalesay-template
-    inputs:
-      parameters:
-      - name: message
-    container:
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["{{inputs.parameters.message}}"]
-
-

Referencing other ClusterWorkflowTemplates

-

You can reference templates from other ClusterWorkflowTemplates using a templateRef field with clusterScope: true. Just as when you reference other templates within the same Workflow, you should do so from a steps or dag template.

-

Here is an example:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-template-hello-world-
-spec:
-  entrypoint: whalesay
-  templates:
-  - name: whalesay
-    steps:                              # You should only reference external "templates" in a "steps" or "dag" "template".
-      - - name: call-whalesay-template
-          templateRef:                  # You can reference a "template" from another "WorkflowTemplate or ClusterWorkflowTemplate" using this field
-            name: cluster-workflow-template-whalesay-template   # This is the name of the "WorkflowTemplate or ClusterWorkflowTemplate" CRD that contains the "template" you want
-            template: whalesay-template # This is the name of the "template" you want to reference
-            clusterScope: true          # This field indicates this templateRef is pointing ClusterWorkflowTemplate
-          arguments:                    # You can pass in arguments as normal
-            parameters:
-            - name: message
-              value: "hello world"
-
-
-

2.9 and after

-
-

Create Workflow from ClusterWorkflowTemplate Spec

-

You can create a Workflow from a ClusterWorkflowTemplate spec using workflowTemplateRef with clusterScope: true. If you pass arguments to the created Workflow, they are merged with the ClusterWorkflowTemplate's arguments.

-

Here is an example of a ClusterWorkflowTemplate with an entrypoint and arguments:

-
apiVersion: argoproj.io/v1alpha1
-kind: ClusterWorkflowTemplate
-metadata:
-  name: cluster-workflow-template-submittable
-spec:
-  entrypoint: whalesay-template
-  arguments:
-    parameters:
-      - name: message
-        value: hello world
-  templates:
-    - name: whalesay-template
-      inputs:
-        parameters:
-          - name: message
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{inputs.parameters.message}}"]
-
-

Here is an example of creating a Workflow from this ClusterWorkflowTemplate, passing an entrypoint and arguments to it:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: cluster-workflow-template-hello-world-
-spec:
-  entrypoint: whalesay-template
-  arguments:
-    parameters:
-      - name: message
-        value: "from workflow"
-  workflowTemplateRef:
-    name: cluster-workflow-template-submittable
-    clusterScope: true
-
-

Here is an example of creating a Workflow from the ClusterWorkflowTemplate, using the ClusterWorkflowTemplate's own entrypoint and arguments:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: cluster-workflow-template-hello-world-
-spec:
-  workflowTemplateRef:
-    name: cluster-workflow-template-submittable
-    clusterScope: true
-
-

Managing ClusterWorkflowTemplates

-

CLI

-

You can create some example templates as follows:

-
argo cluster-template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/clustertemplates.yaml
-
-

Then submit a workflow using one of those templates:

-
argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/cluster-workflow-template/cluster-wftmpl-dag.yaml
-
-
-

2.7 and after

-

Then submit a ClusterWorkflowTemplate as a Workflow:

-
-
argo submit --from clusterworkflowtemplate/cluster-workflow-template-submittable
-
-

kubectl

-

Use kubectl apply -f to create or update ClusterWorkflowTemplates and kubectl get cwft to list them, as shown in the sketch below.

-
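A minimal sketch of managing ClusterWorkflowTemplates with kubectl; the manifest file name is hypothetical:

# Create or update ClusterWorkflowTemplates from a manifest file:
kubectl apply -f cluster-workflow-template.yaml

# List ClusterWorkflowTemplates using the short name:
kubectl get cwft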

UI

-

ClusterWorkflowTemplate resources can also be managed by the UI


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cluster-workflow-templates/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/conditional-artifacts-parameters/index.html b/conditional-artifacts-parameters/index.html index 8cb0dba207ce..83f07aa7ee7f 100644 --- a/conditional-artifacts-parameters/index.html +++ b/conditional-artifacts-parameters/index.html @@ -1,4019 +1,11 @@ - - - + - - - - - - - - - - - - Conditional Artifacts and Parameters - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Conditional Artifacts and Parameters - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Conditional Artifacts and Parameters

-
-

v3.1 and after

-
-

You can set Step/DAG level artifacts or parameters based on an expression. -Use fromExpression under a Step/DAG level output artifact and expression under a Step/DAG level output parameter.

-

Conditional Artifacts

-
- name: coinflip
-  steps:
-    - - name: flip-coin
-        template: flip-coin
-    - - name: heads
-        template: heads
-        when: "{{steps.flip-coin.outputs.result}} == heads"
-      - name: tails
-        template: tails
-        when: "{{steps.flip-coin.outputs.result}} == tails"
-  outputs:
-    artifacts:
-      - name: result
-        fromExpression: "steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.artifacts.headsresult : steps.tails.outputs.artifacts.tailsresult"
-
- -

Conditional Parameters

-
    - name: coinflip
-      steps:
-        - - name: flip-coin
-            template: flip-coin
-        - - name: heads
-            template: heads
-            when: "{{steps.flip-coin.outputs.result}} == heads"
-          - name: tails
-            template: tails
-            when: "{{steps.flip-coin.outputs.result}} == tails"
-      outputs:
-        parameters:
-          - name: stepresult
-            valueFrom:
-              expression: "steps['flip-coin'].outputs.result == 'heads' ? steps.heads.outputs.result : steps.tails.outputs.result"
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/conditional-artifacts-parameters/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/configure-archive-logs/index.html b/configure-archive-logs/index.html index e53cf68486c2..4f608a862648 100644 --- a/configure-archive-logs/index.html +++ b/configure-archive-logs/index.html @@ -1,4083 +1,11 @@ - - - + - - - - - - - - - - - - Configuring Archive Logs - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Configuring Archive Logs - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Configuring Archive Logs

-

⚠️ We do not recommend you rely on Argo Workflows to archive logs. Instead, use a conventional Kubernetes logging facility.

-

To enable automatic pipeline logging, you need to configure archiveLogs at the workflow-controller ConfigMap, workflow spec, or template level. You also need to configure an Artifact Repository to define where the log artifacts are stored.

-

Log archiving follows this order of precedence:

-

workflow-controller config (on) > workflow spec (on/off) > template (on/off)

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Controller Config MapWorkflow SpecTemplateare we archiving logs?
truetruetruetrue
truetruefalsetrue
truefalsetruetrue
truefalsefalsetrue
falsetruetruetrue
falsetruefalsefalse
falsefalsetruetrue
falsefalsefalsefalse
-

Configuring Workflow Controller Config Map

-

See Workflow Controller Config Map

-
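For reference, here is a minimal sketch of enabling log archiving in the workflow-controller ConfigMap; it assumes an S3-compatible artifact repository, and the bucket, endpoint, and secret names are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  artifactRepository: |
    archiveLogs: true
    s3:
      bucket: my-bucket
      endpoint: argo-artifacts:9000
      insecure: true
      accessKeySecret:
        name: my-minio-cred
        key: accesskey
      secretKeySecret:
        name: my-minio-cred
        key: secretkey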

Configuring Workflow Spec

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: archive-location-
-spec:
-  archiveLogs: true
-  entrypoint: whalesay
-  templates:
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [cowsay]
-      args: ["hello world"]
-
-

Configuring Workflow Template

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: archive-location-
-spec:
-  entrypoint: whalesay
-  templates:
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [cowsay]
-      args: ["hello world"]
-    archiveLocation:
-      archiveLogs: true
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/configure-archive-logs/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/configure-artifact-repository/index.html b/configure-artifact-repository/index.html index ee89618f8320..bdc86fc083d6 100644 --- a/configure-artifact-repository/index.html +++ b/configure-artifact-repository/index.html @@ -1,4742 +1,11 @@ - - - + - - - - - - - - - - - - Configuring Your Artifact Repository - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Configuring Your Artifact Repository - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Configuring Your Artifact Repository

-

To run Argo workflows that use artifacts, you must configure and use an artifact repository. Argo supports any S3-compatible artifact repository such as AWS S3, GCS, and MinIO. This section shows how to configure the artifact repository; subsequent sections will show how to use it.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameInputsOutputsGarbage CollectionUsage (Feb 2020)
ArtifactoryYesYesNo11%
Azure BlobYesYesYes-
GCSYesYesYes-
GitYesNoNo-
HDFSYesYesNo3%
HTTPYesYesNo2%
OSSYesYesNo-
RawYesNoNo5%
S3YesYesYes86%
-

The actual repository used by a workflow is chosen by the following rules:

-
  1. Anything explicitly configured using Artifact Repository Ref. This is the most flexible, safe, and secure option.
  2. From a config map named artifact-repositories if it has the workflows.argoproj.io/default-artifact-repository annotation in the workflow's namespace (see the sketch after this list).
  3. From a workflow controller config-map.
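For example, here is a minimal sketch of an artifact-repositories config map for rule 2; the bucket, endpoint, and secret names are placeholders, and the structure mirrors the OSS example further down this page:

apiVersion: v1
kind: ConfigMap
metadata:
  name: artifact-repositories
  annotations:
    workflows.argoproj.io/default-artifact-repository: default-s3-artifact-repository
data:
  default-s3-artifact-repository: |
    s3:
      bucket: my-bucket
      endpoint: argo-artifacts:9000
      insecure: true
      accessKeySecret:
        name: my-minio-cred
        key: accesskey
      secretKeySecret:
        name: my-minio-cred
        key: secretkey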

Configuring MinIO

-

You can install MinIO into your cluster via Helm.

-

First, install helm. Then, install MinIO with the below commands:

-
helm repo add minio https://helm.min.io/ # official minio Helm charts
-helm repo update
-helm install argo-artifacts minio/minio --set service.type=LoadBalancer --set fullnameOverride=argo-artifacts
-
-

Login to the MinIO UI using a web browser (port 9000) after obtaining the -external IP using kubectl.

-
kubectl get service argo-artifacts
-
-

On Minikube:

-
minikube service --url argo-artifacts
-
-

NOTE: When MinIO is installed via Helm, it generates credentials that you will use to log in to the UI. Use the commands below to retrieve the credentials:

-
    -
  • AccessKey: kubectl get secret argo-artifacts -o jsonpath='{.data.accesskey}' | base64 --decode
  • SecretKey: kubectl get secret argo-artifacts -o jsonpath='{.data.secretkey}' | base64 --decode
-

Create a bucket named my-bucket from the MinIO UI.

-

If MinIO is configured to use TLS you need to set the parameter insecure to false. Additionally, if MinIO is protected by certificates generated by a custom CA, you first need to save the CA certificate in a Kubernetes secret, then set the caSecret parameter accordingly. This will allow Argo to correctly verify the server certificate presented by MinIO. For example:

-
kubectl create secret generic my-root-ca --from-file=my-ca.pem
-
-
artifacts:
-  - s3:
-      insecure: false
-      caSecret:
-        name: my-root-ca
-        key: my-ca.pem
-      ...
-
-

Configuring AWS S3

-

Create your bucket and access keys for the bucket. AWS access keys have the same -permissions as the user they are associated with. In particular, you cannot -create access keys with reduced scope. If you want to limit the permissions for -an access key, you will need to create a user with just the permissions you want -to associate with the access key. Otherwise, you can just create an access key -using your existing user account.

-
$ export mybucket=bucket249
-$ cat > policy.json <<EOF
-{
-   "Version":"2012-10-17",
-   "Statement":[
-      {
-         "Effect":"Allow",
-         "Action":[
-            "s3:PutObject",
-            "s3:GetObject",
-            "s3:DeleteObject"
-         ],
-         "Resource":"arn:aws:s3:::$mybucket/*"
-      },
-      {
-         "Effect":"Allow",
-         "Action":[
-            "s3:ListBucket"
-         ],
-         "Resource":"arn:aws:s3:::$mybucket"
-      }
-   ]
-}
-EOF
-$ aws s3 mb s3://$mybucket [--region xxx]
-$ aws iam create-user --user-name $mybucket-user
-$ aws iam put-user-policy --user-name $mybucket-user --policy-name $mybucket-policy --policy-document file://policy.json
-$ aws iam create-access-key --user-name $mybucket-user > access-key.json
-
-

If you do not have Artifact Garbage Collection configured, you should remove s3:DeleteObject from the list of Actions above.

-

NOTE: if you want Argo to figure out which region your buckets belong in, you must additionally add the following policy statement. Otherwise, you must specify a bucket region in your workflow configuration.

-
      {
-         "Effect":"Allow",
-         "Action":[
-            "s3:GetBucketLocation"
-         ],
-         "Resource":"arn:aws:s3:::*"
-      }
-    ...
-
-
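With the bucket and access key in place, you can reference them from an artifact repository definition. A minimal sketch, assuming the generated key pair has been stored in a Kubernetes secret named my-s3-credentials (hypothetical) with keys accessKey and secretKey; the bucket name and region are placeholders:

s3:
  bucket: my-bucket
  endpoint: s3.amazonaws.com
  region: us-west-2
  accessKeySecret:
    name: my-s3-credentials
    key: accessKey
  secretKeySecret:
    name: my-s3-credentials
    key: secretKey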

AWS S3 IRSA

-

If you wish to use S3 IRSA instead of passing in an accessKey and secretKey, you need to annotate the service account of both the running workflow (in order to save logs/artifacts) and the argo-server pod (in order to retrieve the logs/artifacts).

-
apiVersion: v1
-kind: ServiceAccount
-metadata:
-  annotations:
-    eks.amazonaws.com/role-arn: arn:aws:iam::012345678901:role/mybucket
-  name: myserviceaccount
-  namespace: mynamespace
-
-

Configuring GCS (Google Cloud Storage)

-

Create a bucket from the GCP Console -(https://console.cloud.google.com/storage/browser).

-

There are two ways to configure Google Cloud Storage.

-

Through Native GCS APIs

-
    -
  • Create and download a Google Cloud service account key.
  • Create a Kubernetes secret to store the key.
  • Configure the gcs artifact as follows in the YAML.
-
artifacts:
-  - name: message
-    path: /tmp/message
-    gcs:
-      bucket: my-bucket-name
-      key: path/in/bucket
-      # serviceAccountKeySecret is a secret selector.
-      # It references the k8s secret named 'my-gcs-credentials'.
-      # This secret is expected to have the key 'serviceAccountKey',
-      # containing the base64 encoded credentials
-      # to the bucket.
-      #
-      # If it's running on GKE and Workload Identity is used,
-      # serviceAccountKeySecret is not needed.
-      serviceAccountKeySecret:
-        name: my-gcs-credentials
-        key: serviceAccountKey
-
-

If it's a GKE cluster and Workload Identity is configured, there's no need to create the service account key and store it as a Kubernetes secret; serviceAccountKeySecret is also not needed in this case. Please follow this link to configure Workload Identity: https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity.

-

Use S3 APIs

-

Enable S3 compatible access and create an access key. Note that S3 compatible -access is on a per project rather than per bucket basis.

- -
artifacts:
-  - name: my-output-artifact
-    path: /my-output-artifact
-    s3:
-      endpoint: storage.googleapis.com
-      bucket: my-gcs-bucket-name
-      # NOTE that, by default, all output artifacts are automatically tarred and
-      # gzipped before saving. So as a best practice, .tgz or .tar.gz
-      # should be incorporated into the key name so the resulting file
-      # has an accurate file extension.
-      key: path/in/bucket/my-output-artifact.tgz
-      accessKeySecret:
-        name: my-gcs-s3-credentials
-        key: accessKey
-      secretKeySecret:
-        name: my-gcs-s3-credentials
-        key: secretKey
-
-

Configuring Alibaba Cloud OSS (Object Storage Service)

-

Create your bucket and an access key for it. We suggest limiting the permissions of the access key: create a user with just the permissions you want to associate with the access key. Otherwise, you can create an access key using your existing user account.

-

Set up the Alibaba Cloud CLI and follow these steps to configure artifact storage for your workflow:

-
$ export mybucket=bucket-workflow-artifect
-$ export myregion=cn-zhangjiakou
-$ # limit permission to read/write the bucket.
-$ cat > policy.json <<EOF
-{
-    "Version": "1",
-    "Statement": [
-        {
-            "Effect": "Allow",
-            "Action": [
-              "oss:PutObject",
-              "oss:GetObject"
-            ],
-            "Resource": "acs:oss:*:*:$mybucket/*"
-        }
-    ]
-}
-EOF
-$ # create bucket.
-$ aliyun oss mb oss://$mybucket --region $myregion
-$ # show endpoint of bucket.
-$ aliyun oss stat oss://$mybucket
-$ #create a ram user to access bucket.
-$ aliyun ram CreateUser --UserName $mybucket-user
-$ # create ram policy with the limit permission.
-$ aliyun ram CreatePolicy --PolicyName $mybucket-policy --PolicyDocument "$(cat policy.json)"
-$ # attch ram policy to the ram user.
-$ aliyun ram AttachPolicyToUser --UserName $mybucket-user --PolicyName $mybucket-policy --PolicyType Custom
-$ # create access key and secret key for the ram user.
-$ aliyun ram CreateAccessKey --UserName $mybucket-user > access-key.json
-$ # create secret in demo namespace, replace demo with your namespace.
-$ kubectl create secret generic $mybucket-credentials -n demo\
-  --from-literal "accessKey=$(cat access-key.json | jq -r .AccessKey.AccessKeyId)" \
-  --from-literal "secretKey=$(cat access-key.json | jq -r .AccessKey.AccessKeySecret)"
-$ # create configmap to config default artifact for a namespace.
-$ cat > default-artifact-repository.yaml << EOF
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  # If you want to use this config map by default, name it "artifact-repositories". Otherwise, you can provide a reference to a
-  # different config map in `artifactRepositoryRef.configMap`.
-  name: artifact-repositories
-  annotations:
-    # v3.0 and after - if you want to use a specific key, put that key into this annotation.
-    workflows.argoproj.io/default-artifact-repository: default-oss-artifact-repository
-data:
-  default-oss-artifact-repository: |
-    oss:
-      endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com
-      bucket: $mybucket
-      # accessKeySecret and secretKeySecret are secret selectors.
-      # It references the k8s secret named 'bucket-workflow-artifect-credentials'.
-      # This secret is expected to have the keys 'accessKey'
-      # and 'secretKey', containing the base64 encoded credentials
-      # to the bucket.
-      accessKeySecret:
-        name: $mybucket-credentials
-        key: accessKey
-      secretKeySecret:
-        name: $mybucket-credentials
-        key: secretKey
-EOF
-# create cm in demo namespace, replace demo with your namespace.
-$ k apply -f default-artifact-repository.yaml -n demo
-
-

You can also set createBucketIfNotPresent to true to tell the artifact driver to automatically create the OSS bucket if it doesn't exist yet when saving artifacts. Note that you'll need to set additional permission for your OSS account to create new buckets.
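For example, a sketch of the repository entry above with automatic bucket creation enabled (assuming the OSS account has also been granted permission to create buckets):

```yaml
oss:
  endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com
  bucket: $mybucket
  # automatically create the bucket when saving artifacts if it does not exist yet
  createBucketIfNotPresent: true
  accessKeySecret:
    name: $mybucket-credentials
    key: accessKey
  secretKeySecret:
    name: $mybucket-credentials
    key: secretKey
```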

-

Alibaba Cloud OSS RRSA

-

If you wish to use OSS RRSA instead of passing in an accessKey and secretKey, you need to perform the following actions:

-
    -
  • Install pod-identity-webhook in your cluster to automatically inject the OIDC tokens and environment variables.
  • -
  • Add the label pod-identity.alibabacloud.com/injection: 'on' to the target workflow namespace.
  • -
  • Add the annotation pod-identity.alibabacloud.com/role-name: $your_ram_role_name to the service account of the running workflow.
  • -
  • Set useSDKCreds: true in your target artifact repository cm and remove the secret references to AK/SK.
  • -
-
apiVersion: v1
-kind: Namespace
-metadata:
-  name: my-ns
-  labels:
-    pod-identity.alibabacloud.com/injection: 'on'
-
----
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: my-sa
-  namespace: rrsa-demo
-  annotations:
-    pod-identity.alibabacloud.com/role-name: $your_ram_role_name
-
----
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  # If you want to use this config map by default, name it "artifact-repositories". Otherwise, you can provide a reference to a
-  # different config map in `artifactRepositoryRef.configMap`.
-  name: artifact-repositories
-  annotations:
-    # v3.0 and after - if you want to use a specific key, put that key into this annotation.
-    workflows.argoproj.io/default-artifact-repository: default-oss-artifact-repository
-data:
-  default-oss-artifact-repository: |
-    oss:
-      endpoint: http://oss-cn-zhangjiakou-internal.aliyuncs.com
-      bucket: $mybucket
-      useSDKCreds: true
-
-

Configuring Azure Blob Storage

-

Create an Azure Storage account and a container within that account. There are a number of -ways to accomplish this, including the Azure Portal or the -CLI.

-
    -
  1. Retrieve the blob service endpoint for the storage account. For example:
  2. -
-
az storage account show -n mystorageaccountname --query 'primaryEndpoints.blob' -otsv
-
-
    -
  2. Retrieve the access key for the storage account. For example:
  2. -
-
az storage account keys list -n mystorageaccountname --query '[0].value' -otsv
-
-
    -
  3. Create a Kubernetes secret to hold the storage account key. For example:
  2. -
-
kubectl create secret generic my-azure-storage-credentials \
-  --from-literal "account-access-key=$(az storage account keys list -n mystorageaccountname --query '[0].value' -otsv)"
-
-
    -
  4. Configure the azure artifact as follows in the YAML:
  2. -
-
artifacts:
-  - name: message
-    path: /tmp/message
-    azure:
-      endpoint: https://mystorageaccountname.blob.core.windows.net
-      container: my-container-name
-      blob: path/in/container
-      # accountKeySecret is a secret selector.
-      # It references the k8s secret named 'my-azure-storage-credentials'.
-      # This secret is expected to have the key 'account-access-key',
-      # containing the base64 encoded credentials to the storage account.
-      #
-      # If a managed identity has been assigned to the machines running the
-      # workflow (e.g., https://docs.microsoft.com/en-us/azure/aks/use-managed-identity)
-      # then accountKeySecret is not needed, and useSDKCreds should be
-      # set to true instead:
-      # useSDKCreds: true
-      accountKeySecret:
-        name: my-azure-storage-credentials
-        key: account-access-key     
-
-

If useSDKCreds is set to true, then the accountKeySecret value is not -used and authentication with Azure will be attempted using a -DefaultAzureCredential -instead.

-

Configure the Default Artifact Repository

-

In order for Argo to use your artifact repository, you can configure it as the -default repository. Edit the workflow-controller config map with the correct -endpoint and access/secret keys for your repository.

-

S3 compatible artifact repository bucket (such as AWS, GCS, MinIO, and Alibaba Cloud OSS)

-

Use the endpoint corresponding to your provider:

-
    -
  • AWS: s3.amazonaws.com
  • -
  • GCS: storage.googleapis.com
  • -
  • MinIO: my-minio-endpoint.default:9000
  • -
  • Alibaba Cloud OSS: oss-cn-hangzhou-zmf.aliyuncs.com
  • -
-

The key is the name of the object in the bucket. The accessKeySecret and secretKeySecret are secret selectors that reference the specified Kubernetes secret. The secret is expected to have the keys accessKey and secretKey, containing the base64 encoded credentials to the bucket.

-

For AWS, the accessKeySecret and secretKeySecret correspond to -AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively.

-

EC2 provides a meta-data API via which applications using the AWS SDK may assume -IAM roles associated with the instance. If you are running argo on EC2 and the -instance role allows access to your S3 bucket, you can configure the workflow -step pods to assume the role. To do so, simply omit the accessKeySecret and -secretKeySecret fields.
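For example, a sketch of an S3 output artifact that relies purely on the instance role, with illustrative bucket and key names:

```yaml
artifacts:
  - name: my-output-artifact
    path: /my-output-artifact
    s3:
      endpoint: s3.amazonaws.com
      bucket: my-aws-bucket-name
      key: path/in/bucket/my-output-artifact.tgz
      # accessKeySecret and secretKeySecret are omitted, so credentials are
      # resolved by the AWS SDK (e.g. from the EC2 instance profile).
```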

-

For GCS, the accessKeySecret and secretKeySecret for S3 compatible access -can be obtained from the GCP Console. Note that S3 compatible access is on a per -project rather than per bucket basis.

- -

For MinIO, the accessKeySecret and secretKeySecret naturally correspond to the AccessKey and SecretKey.

-

For Alibaba Cloud OSS, the accessKeySecret and secretKeySecret correspond to accessKeyID and accessKeySecret respectively.

-

Example:

-
$ kubectl edit configmap workflow-controller-configmap -n argo  # assumes argo was installed in the argo namespace
-...
-data:
-  artifactRepository: |
-    s3:
-      bucket: my-bucket
-      keyFormat: prefix/in/bucket     #optional
-      endpoint: my-minio-endpoint.default:9000        #AWS => s3.amazonaws.com; GCS => storage.googleapis.com
-      insecure: true                  #omit for S3/GCS. Needed when minio runs without TLS
-      accessKeySecret:                #omit if accessing via AWS IAM
-        name: my-minio-cred
-        key: accessKey
-      secretKeySecret:                #omit if accessing via AWS IAM
-        name: my-minio-cred
-        key: secretKey
-      useSDKCreds: true               #tells argo to use AWS SDK's default provider chain, enable for things like IRSA support
-
-

The secrets are retrieved from the namespace you use to run your workflows. Note -that you can specify a keyFormat.
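For example, keyFormat can include workflow variables so that artifacts are grouped per workflow and pod. A sketch (the prefix is arbitrary):

```yaml
artifactRepository: |
  s3:
    bucket: my-bucket
    keyFormat: "artifacts/{{workflow.name}}/{{pod.name}}"
    endpoint: s3.amazonaws.com
```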

-

Google Cloud Storage (GCS)

-

Argo can also use native GCS APIs to access a Google Cloud Storage bucket.

-

serviceAccountKeySecret references a Kubernetes secret which stores a Google Cloud service account key used to access the bucket.

-

Example:

-
$ kubectl edit configmap workflow-controller-configmap -n argo  # assumes argo was installed in the argo namespace
-...
-data:
-  artifactRepository: |
-    gcs:
-      bucket: my-bucket
-      keyFormat: prefix/in/bucket/{{workflow.name}}/{{pod.name}}     #it should reference workflow variables, such as "{{workflow.name}}/{{pod.name}}"
-      serviceAccountKeySecret:
-        name: my-gcs-credentials
-        key: serviceAccountKey
-
-

Azure Blob Storage

-

Argo can use native Azure APIs to access an Azure Blob Storage container.

-

accountKeySecret references a Kubernetes secret which stores an Azure Blob Storage account shared key used to access the container.

-

Example:

-
$ kubectl edit configmap workflow-controller-configmap -n argo  # assumes argo was installed in the argo namespace
-...
-data:
-  artifactRepository: |
-    azure:
-      container: my-container
-      blobNameFormat: prefix/in/container     #optional, it could reference workflow variables, such as "{{workflow.name}}/{{pod.name}}"
-      accountKeySecret:
-        name: my-azure-storage-credentials
-        key: account-access-key
-
-

Accessing Non-Default Artifact Repositories

-

This section shows how to access artifacts from non-default artifact -repositories.

-

The endpoint, accessKeySecret and secretKeySecret are the same as for -configuring the default artifact repository described previously.

-
  templates:
-  - name: artifact-example
-    inputs:
-      artifacts:
-      - name: my-input-artifact
-        path: /my-input-artifact
-        s3:
-          endpoint: s3.amazonaws.com
-          bucket: my-aws-bucket-name
-          key: path/in/bucket/my-input-artifact.tgz
-          accessKeySecret:
-            name: my-aws-s3-credentials
-            key: accessKey
-          secretKeySecret:
-            name: my-aws-s3-credentials
-            key: secretKey
-    outputs:
-      artifacts:
-      - name: my-output-artifact
-        path: /my-output-artifact
-        s3:
-          endpoint: storage.googleapis.com
-          bucket: my-gcs-bucket-name
-          # NOTE that, by default, all output artifacts are automatically tarred and
-          # gzipped before saving. So as a best practice, .tgz or .tar.gz
-          # should be incorporated into the key name so the resulting file
-          # has an accurate file extension.
-          key: path/in/bucket/my-output-artifact.tgz
-          accessKeySecret:
-            name: my-gcs-s3-credentials
-            key: accessKey
-          secretKeySecret:
-            name: my-gcs-s3-credentials
-            key: secretKey
-          region: my-GCS-storage-bucket-region
-    container:
-      image: debian:latest
-      command: [sh, -c]
-      args: ["cp -r /my-input-artifact /my-output-artifact"]
-
-

Artifact Streaming

-

With artifact streaming, artifacts don’t need to be saved to disk first. Artifact streaming is only supported in the following -artifact drivers: S3 (v3.4+), Azure Blob (v3.4+), HTTP (v3.5+), and Artifactory (v3.5+).

-

Previously, when a user would click the button to download an artifact in the UI, the artifact would need to be written to the -Argo Server’s disk first before downloading. If many users tried to download simultaneously, they would take up -disk space and fail the download.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/configure-artifact-repository/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/container-set-template/index.html b/container-set-template/index.html index e2107442955a..db5543e15e3c 100644 --- a/container-set-template/index.html +++ b/container-set-template/index.html @@ -1,4071 +1,11 @@ - - - + - - - - - - - - - - - - Container Set Template - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Container Set Template - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Container Set Template

-
-

v3.1 and after

-
-

A container set template is similar to a normal container or script template, but allows you to specify multiple containers to run within a single pod.

-

Because you have multiple containers within a pod, they will be scheduled on the same host. You can use cheap and fast -empty-dir volumes instead of persistent volume claims to share data between steps.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: container-set-template-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      volumes:
-        - name: workspace
-          emptyDir: { }
-      containerSet:
-        volumeMounts:
-          - mountPath: /workspace
-            name: workspace
-        containers:
-          - name: a
-            image: argoproj/argosay:v2
-            command: [sh, -c]
-            args: ["echo 'a: hello world' >> /workspace/message"]
-          - name: b
-            image: argoproj/argosay:v2
-            command: [sh, -c]
-            args: ["echo 'b: hello world' >> /workspace/message"]
-          - name: main
-            image: argoproj/argosay:v2
-            command: [sh, -c]
-            args: ["echo 'main: hello world' >> /workspace/message"]
-            dependencies:
-              - a
-              - b
-      outputs:
-        parameters:
-          - name: message
-            valueFrom:
-              path: /workspace/message
-
-

There are a couple of caveats:

-
    -
  1. You must use the Emissary Executor.
  2. -
  3. Or all containers must run in parallel - i.e. it is a graph with no dependencies.
  4. -
  5. You cannot use enhanced depends logic.
  6. -
  7. It will use the sum total of all resource requests, maybe costing more than the same DAG template. This will be a problem if your requests already cost a lot. See below.
  8. -
-

The containers can be arranged as a graph by specifying dependencies. This is suitable for running 10s rather than 100s -of containers.

-

Inputs and Outputs

-

As with the container and script templates, inputs and outputs can only be loaded and saved from a container -named main.

-

All container set templates that have artifacts must/should have a container named main.

-

If you want to use base-layer artifacts, main must be last to finish, so it must be the root node in the graph.

-

That may not be practical.

-

Instead, have a workspace volume and make sure all artifact paths are on that volume.
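For example, a sketch of a template whose output artifact lives on the shared workspace volume rather than the base layer (image and paths are illustrative):

```yaml
- name: main
  volumes:
    - name: workspace
      emptyDir: { }
  containerSet:
    volumeMounts:
      - mountPath: /workspace
        name: workspace
    containers:
      - name: main
        image: argoproj/argosay:v2
        command: [sh, -c]
        args: ["echo 'hello' > /workspace/out.txt"]
  outputs:
    artifacts:
      - name: out
        # the artifact path is on the workspace volume, so it can be saved
        # regardless of which container finishes last
        path: /workspace/out.txt
```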

-

⚠️ Resource Requests

-

A container set actually starts all containers, and the Emissary only starts the main container process when the containers it depends on have completed. This means that even though the container is doing no useful work, it is still consuming resources and you're still getting billed for them.

-

If your requests are small, this won't be a problem.

-

If your requests are large, set the resource requests so the sum total is the most you'll need at once.

-

Example A: a simple sequence e.g. a -> b -> c

-
    -
  • a needs 1Gi memory
  • -
  • b needs 2Gi memory
  • -
  • c needs 1Gi memory
  • -
-

Then you know you need only a maximum of 2Gi. You could set as follows:

-
    -
  • a requests 512Mi memory
  • -
  • b requests 1Gi memory
  • -
  • c requests 512Mi memory
  • -
-

The total is 2Gi, which is enough for b. We're all good.
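A sketch of how those per-container requests could be declared for the sequence above (memory values copied from the reasoning; the image is illustrative):

```yaml
containerSet:
  containers:
    - name: a
      image: argoproj/argosay:v2
      resources:
        requests:
          memory: 512Mi
    - name: b
      image: argoproj/argosay:v2
      resources:
        requests:
          memory: 1Gi
      dependencies: [a]
    - name: c
      image: argoproj/argosay:v2
      resources:
        requests:
          memory: 512Mi
      dependencies: [b]
```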

-

Example B: Diamond DAG e.g. a diamond a -> b -> d and a -> c -> d, i.e. b and c run at the same time.

-
    -
  • a needs 1000 cpu
  • -
  • b needs 2000 cpu
  • -
  • c needs 1000 cpu
  • -
  • d needs 1000 cpu
  • -
-

I know that b and c will run at the same time. So I need to make sure the total is 3000.

-
    -
  • a requests 500 cpu
  • -
  • b requests 1000 cpu
  • -
  • c requests 1000 cpu
  • -
  • d requests 500 cpu
  • -
-

The total is 3000, which is enough for b + c. We're all good.

-

Example C: Lopsided requests, e.g. a -> b where a is cheap and b is expensive

-
    -
  • a needs 100 cpu, 1Mi memory, runs for 10h
  • -
  • b needs 8Ki GPU, 100 Gi memory, 200 Ki GPU, runs for 5m
  • -
-

Can you see the problem here? a only has small requests, but the container set will use the total of all requests. So it's as if you're using all that GPU for 10h. This will be expensive.

-

Solution: do not use container set when you have lopsided requests.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/container-set-template/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cost-optimisation/index.html b/cost-optimisation/index.html index 6b492ac7b6a9..3266f7478a8d 100644 --- a/cost-optimisation/index.html +++ b/cost-optimisation/index.html @@ -1,4185 +1,11 @@ - - - + - - - - - - - - - - - - Cost Optimization - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Cost Optimization - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Cost Optimization

-

User Cost Optimizations

-

Suggestions for users running workflows.

-

Set The Workflows Pod Resource Requests

-
-

Suitable if you are running a workflow with many homogeneous pods.

-
-

Resource duration shows the amount of CPU and memory requested by a pod and is indicative of the cost. You can use this to find costly steps within your workflow.

-

Smaller requests can be set in the pod spec patch's resource requirements.
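For example, a sketch of lowering the main container's requests for a single template via podSpecPatch (values and names are illustrative):

```yaml
templates:
  - name: cheap-step
    podSpecPatch: |
      containers:
        - name: main
          resources:
            requests:
              cpu: 100m
              memory: 64Mi
    container:
      image: argoproj/argosay:v2
```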

-

Use A Node Selector To Use Cheaper Instances

-

You can use a node selector for cheaper instances, e.g. spot instances:

-
nodeSelector:
-  "node-role.kubernetes.io/argo-spot-worker": "true"
-
-

Consider trying Volume Claim Templates or Volumes instead of Artifacts

-
-

Suitable if you have a workflow that passes a lot of artifacts within itself.

-
-

Copying artifacts to and from storage outside of a cluster can be expensive. The correct choice depends on what your artifact storage provider is vs. what volume you are using. For example, we believe it may be more expensive to allocate and delete a new block storage volume (AWS EBS, GCP persistent disk) for every workflow using the PVC feature than it is to upload and download some small files to object storage (AWS S3, GCP cloud storage).

-

On the other hand if you are using a NFS volume shared between all your workflows with large artifacts, that might be cheaper than the data transfer and storage costs of object storage.

-

Consider:

-
    -
  • Data transfer costs (upload/download vs. copying)
  • -
  • Data storage costs (object storage vs. volume)
  • -
  • Requirement for parallel access to data (NFS vs. block storage vs. artifact)
  • -
-

When using volume claims, consider configuring Volume Claim GC. By default, claims are only deleted when a workflow is successful.
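For example, a sketch that deletes the claims as soon as the workflow completes, whether it succeeded or not:

```yaml
spec:
  volumeClaimGC:
    strategy: OnWorkflowCompletion  # default is OnWorkflowSuccess
```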

-

Limit The Total Number Of Workflows And Pods

-
-

Suitable for all.

-
-

A workflow (and for that matter, any Kubernetes resource) will incur a cost as long as it exists in your cluster, even after it's no longer running.

-

The workflow controller memory and CPU needs to increase linearly with the number of pods and workflows you are currently running.

-

You should delete workflows once they are no longer needed. -You can enable the Workflow Archive to continue viewing them after they are removed from Kubernetes.

-

Limit the total number of workflows using:

-
    -
  • Active Deadline Seconds - terminate running workflows that do not complete in a set time. This will make sure workflows do not run forever.
  • -
  • Workflow TTL Strategy - delete completed workflows after a set time.
  • -
  • Pod GC - delete completed pods. By default, Pods are not deleted.
  • -
  • CronWorkflow history limits - delete successful or failed workflows which exceed the limit.
  • -
-

Example

-
spec:
-  # must complete in 8h (28,800 seconds)
-  activeDeadlineSeconds: 28800
-  # keep workflows for 1d (86,400 seconds)
-  ttlStrategy:
-    secondsAfterCompletion: 86400
-  # delete all pods as soon as they complete
-  podGC:
-    strategy: OnPodCompletion
-
-

You can set these configurations globally using Default Workflow Spec.

-

Changing these settings will not delete workflows that have already run. To list old workflows:

-
argo list --completed --since 7d
-
-
-

v2.9 and after

-
-

To list/delete workflows completed over 7 days ago:

-
argo list --older 7d
-argo delete --older 7d
-
-

Operator Cost Optimizations

-

Suggestions for operators who installed Argo Workflows.

-

Set Resources Requests and Limits

-
-

Suitable if you have many instances, e.g. on dozens of clusters or namespaces.

-
-

Set resource requests and limits for the workflow-controller and argo-server, e.g.

-
requests:
-  cpu: 100m
-  memory: 64Mi
-limits:
-  cpu: 500m
-  memory: 128Mi
-
-

The above limits are suitable for the Argo Server, as it is stateless. The Workflow Controller is stateful and will scale with the number of live workflows, so you are likely to need higher values for it.

-

Configure Executor Resource Requests

-
-

Suitable for all - unless you have large artifacts.

-
-

Configure workflow-controller-configmap.yaml to set the executor.resources:

-
executor: |
-  resources:
-    requests:
-      cpu: 100m
-      memory: 64Mi
-    limits:
-      cpu: 500m
-      memory: 512Mi
-
-

The correct values depend on the size of artifacts your workflows download. For artifacts > 10GB, memory usage may be large - #1322.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cost-optimisation/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cron-backfill/index.html b/cron-backfill/index.html index 54320937d8e2..8faeff9ffb6e 100644 --- a/cron-backfill/index.html +++ b/cron-backfill/index.html @@ -1,3989 +1,11 @@ - - - + - - - - - - - - - - - - Cron Backfill - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Cron Backfill - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Cron Backfill

-

Use Case

-
    -
  • You are using cron workflows to run daily jobs; you may need to re-run the job for a specific date, or for a range of historical days.
  • -
-

Solution

-
    -
  1. Create a workflow template for your daily job.
  2. -
  3. Create your cron workflow to run daily and invoke that template.
  4. -
  5. Create a backfill workflow that uses withSequence to run the job for each date.
  6. -
-

This full example contains:

-
    -
  • A workflow template named job.
  • -
  • A cron workflow named daily-job.
  • -
  • A workflow named backfill-v1 that uses a resource template to create one workflow for each backfill date.
  • -
  • An alternative workflow named backfill-v2 that uses a steps template to run one task for each backfill date (a sketch follows this list).
  • -
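A minimal sketch of the backfill-v2 approach, assuming a WorkflowTemplate named job exists with an entry template main that accepts a date parameter (names and the date range are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: backfill-v2-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: run-job
            templateRef:
              name: job        # assumed WorkflowTemplate
              template: main   # assumed entry template taking a "date" parameter
            arguments:
              parameters:
                - name: date
                  value: "2021-01-{{item}}"
            withSequence:
              start: "1"
              end: "7"
              format: "%02d"   # produces 01, 02, ... 07
```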

This page has moved to https://argo-workflows.readthedocs.io/en/latest/cron-backfill/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/cron-workflows/index.html b/cron-workflows/index.html index a422dcd66356..25434662bb4f 100644 --- a/cron-workflows/index.html +++ b/cron-workflows/index.html @@ -1,4413 +1,11 @@ - - - + - - - - - - - - - - - - Cron Workflows - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Cron Workflows - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Cron Workflows

-
-

v2.5 and after

-
-

Introduction

-

CronWorkflows are workflows that run on a preset schedule. They are designed to be converted from Workflow easily and to mimic the same options as Kubernetes CronJob. In essence, CronWorkflow = Workflow + some specific cron options.

-

CronWorkflow Spec

-

An example CronWorkflow spec would look like:

-
apiVersion: argoproj.io/v1alpha1
-kind: CronWorkflow
-metadata:
-  name: test-cron-wf
-spec:
-  schedule: "* * * * *"
-  concurrencyPolicy: "Replace"
-  startingDeadlineSeconds: 0
-  workflowSpec:
-    entrypoint: whalesay
-    templates:
-    - name: whalesay
-      container:
-        image: alpine:3.6
-        command: [sh, -c]
-        args: ["date; sleep 90"]
-
-

workflowSpec and workflowMetadata

-

CronWorkflow.spec.workflowSpec is the same type as Workflow.spec and serves as a template for Workflow objects that are created from it. Everything under this spec will be converted to a Workflow.

-

The resulting Workflow name will be a generated name based on the CronWorkflow name. In this example it could be something like test-cron-wf-tj6fe.

-

CronWorkflow.spec.workflowMetadata can be used to add labels and annotations.
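For example, a sketch adding a label and an annotation to every Workflow created from the CronWorkflow above (the keys and values are illustrative):

```yaml
spec:
  workflowMetadata:
    labels:
      example.com/team: data-eng
    annotations:
      example.com/created-by: test-cron-wf
```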

-

CronWorkflow Options

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Option Name | Default Value | Description |
|---|---|---|
| schedule | None, must be provided | Schedule at which the Workflow will be run. E.g. 5 4 * * * |
| timezone | Machine timezone | Timezone during which the Workflow will be run from the IANA timezone standard, e.g. America/Los_Angeles |
| suspend | false | If true Workflow scheduling will not occur. Can be set from the CLI, GitOps, or directly |
| concurrencyPolicy | Allow | Policy that determines what to do if multiple Workflows are scheduled at the same time. Available options: Allow: allow all, Replace: remove all old before scheduling a new, Forbid: do not allow any new while there are old |
| startingDeadlineSeconds | 0 | Number of seconds after the last successful run during which a missed Workflow will be run |
| successfulJobsHistoryLimit | 3 | Number of successful Workflows that will be persisted at a time |
| failedJobsHistoryLimit | 1 | Number of failed Workflows that will be persisted at a time |
-

Cron Schedule Syntax

-

The cron scheduler uses the standard cron syntax, as documented on Wikipedia.

-

More detailed documentation for the specific library used is documented here.

-

Crash Recovery

-

If the workflow-controller crashes (and hence the CronWorkflow controller), there are some options you can set to ensure that CronWorkflows that would have been scheduled while the controller was down can still run. Mainly startingDeadlineSeconds can be set to specify the maximum number of seconds past the last successful run of a CronWorkflow during which a missed run will still be executed.

-

For example, if a CronWorkflow that runs every minute is last run at 12:05:00, and the controller crashes between 12:05:55 and 12:06:05, then the expected execution time of 12:06:00 would be missed. However, if startingDeadlineSeconds is set to a value greater than 65 (the amount of time passing between the last scheduled run time of 12:05:00 and the current controller restart time of 12:06:05), then a single instance of the CronWorkflow will be executed exactly at 12:06:05.

-

Currently only a single instance will be executed as a result of setting startingDeadlineSeconds.

-

This setting can also be configured in tandem with concurrencyPolicy to achieve more fine-tuned control.
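For example, a sketch that lets a single missed run be caught up within 60 seconds of its scheduled time while forbidding overlapping runs:

```yaml
spec:
  schedule: "* * * * *"
  concurrencyPolicy: "Forbid"
  startingDeadlineSeconds: 60
```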

-

Daylight Saving

-

Daylight Saving (DST) is taken into account when using timezone. This means that, depending on the local time of the scheduled job, Argo will schedule the workflow once, twice, or not at all when the clock moves forward or back.

-

For example, with timezone set at America/Los_Angeles, we have daylight saving

-
    -
  • -

    +1 hour (DST start) at 2020-03-08 02:00:00:

    -

    Note: The schedules between 02:00 a.m. to 02:59 a.m. were skipped on Mar 8th due to the clock being moved forward:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | cron       | sequence | workflow execution time       |
    |------------|----------|-------------------------------|
    | 59 1 * * * | 1        | 2020-03-08 01:59:00 -0800 PST |
    |            | 2        | 2020-03-09 01:59:00 -0700 PDT |
    |            | 3        | 2020-03-10 01:59:00 -0700 PDT |
    | 0 2 * * *  | 1        | 2020-03-09 02:00:00 -0700 PDT |
    |            | 2        | 2020-03-10 02:00:00 -0700 PDT |
    |            | 3        | 2020-03-11 02:00:00 -0700 PDT |
    | 1 2 * * *  | 1        | 2020-03-09 02:01:00 -0700 PDT |
    |            | 2        | 2020-03-10 02:01:00 -0700 PDT |
    |            | 3        | 2020-03-11 02:01:00 -0700 PDT |
    -
  • -
  • -

    -1 hour (DST end) at 2020-11-01 02:00:00:

    -

    Note: the schedules between 01:00 a.m. to 01:59 a.m. were triggered twice on Nov 1st due to the clock being set back:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    | cron       | sequence | workflow execution time       |
    |------------|----------|-------------------------------|
    | 59 1 * * * | 1        | 2020-11-01 01:59:00 -0700 PDT |
    |            | 2        | 2020-11-01 01:59:00 -0800 PST |
    |            | 3        | 2020-11-02 01:59:00 -0800 PST |
    | 0 2 * * *  | 1        | 2020-11-01 02:00:00 -0800 PST |
    |            | 2        | 2020-11-02 02:00:00 -0800 PST |
    |            | 3        | 2020-11-03 02:00:00 -0800 PST |
    | 1 2 * * *  | 1        | 2020-11-01 02:01:00 -0800 PST |
    |            | 2        | 2020-11-02 02:01:00 -0800 PST |
    |            | 3        | 2020-11-03 02:01:00 -0800 PST |
    -
  • -
-

Managing CronWorkflow

-

CLI

-

CronWorkflow can be created from the CLI by using basic commands:

-
$ argo cron create cron.yaml
-Name:                          test-cron-wf
-Namespace:                     argo
-Created:                       Mon Nov 18 10:17:06 -0800 (now)
-Schedule:                      * * * * *
-Suspended:                     false
-StartingDeadlineSeconds:       0
-ConcurrencyPolicy:             Forbid
-
-$ argo cron list
-NAME           AGE   LAST RUN   SCHEDULE    SUSPENDED
-test-cron-wf   49s   N/A        * * * * *   false
-
-# some time passes
-
-$ argo cron list
-NAME           AGE   LAST RUN   SCHEDULE    SUSPENDED
-test-cron-wf   56s   2s         * * * * *   false
-
-$ argo cron get test-cron-wf
-Name:                          test-cron-wf
-Namespace:                     argo
-Created:                       Wed Oct 28 07:19:02 -0600 (23 hours ago)
-Schedule:                      * * * * *
-Suspended:                     false
-StartingDeadlineSeconds:       0
-ConcurrencyPolicy:             Replace
-LastScheduledTime:             Thu Oct 29 06:51:00 -0600 (11 minutes ago)
-NextScheduledTime:             Thu Oct 29 13:03:00 +0000 (32 seconds from now)
-Active Workflows:              test-cron-wf-rt4nf
-
-

Note: NextScheduledRun assumes that the workflow-controller uses UTC as its timezone

-

kubectl

-

Using kubectl apply -f and kubectl get cwf
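For example (cron.yaml being the manifest shown earlier):

```bash
kubectl apply -f cron.yaml
kubectl get cwf
kubectl get cronworkflow test-cron-wf -o yaml
```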

-

Back-Filling Days

-

See cron backfill.

-

GitOps via Argo CD

-

CronWorkflow resources can be managed with GitOps by using Argo CD

-

UI

-

CronWorkflow resources can also be managed by the UI


This page has moved to https://argo-workflows.readthedocs.io/en/latest/cron-workflows/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/data-sourcing-and-transformation/index.html b/data-sourcing-and-transformation/index.html index 1f892af6d18e..86cc5ed1087b 100644 --- a/data-sourcing-and-transformation/index.html +++ b/data-sourcing-and-transformation/index.html @@ -1,4019 +1,11 @@ - - - + - - - - - - - - - - - - Data Sourcing and Transformations - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Data Sourcing and Transformations - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Data Sourcing and Transformations

-
-

v3.1 and after

-
-

We have intentionally made this feature available with only bare-bones functionality. Our hope is that we are able to build this feature with our community's feedback. If you have ideas and use cases for this feature, please open an enhancement proposal on GitHub.

-

Additionally, please take a look at our current ideas at the bottom of this document.

-

Introduction

-

Users often source and transform data as part of their workflows. The data template provides first-class support for these common operations.

-

data templates can best be understood by looking at a common data sourcing and transformation operation in bash:

-
find -r . | grep ".pdf" | sed "s/foo/foo.ready/"
-
-

Such operations consist of two main parts:

-
    -
  • A "source" of data: find -r .
  • -
  • A series of "transformations" which transform the output of the source serially: | grep ".pdf" | sed "s/foo/foo.ready/"
  • -
-

This operation, for example, could be useful in sourcing a potential list of files to be processed and filtering and manipulating the list as desired.

-

In Argo, this operation would be written as:

-
- name: generate-artifacts
-  data:
-    source:             # Define a source for the data, only a single "source" is permitted
-      artifactPaths:    # A predefined source: Generate a list of all artifact paths in a given repository
-        s3:             # Source from an S3 bucket
-          bucket: test
-          endpoint: minio:9000
-          insecure: true
-          accessKeySecret:
-            name: my-minio-cred
-            key: accesskey
-          secretKeySecret:
-            name: my-minio-cred
-            key: secretkey
-    transformation:     # The source is then passed to be transformed by transformations defined here
-      - expression: "filter(data, {# endsWith \".pdf\"})"
-      - expression: "map(data, {# + \".ready\"})"
-
-

Spec

-

A data template must always contain a source. Current available sources:

-
    -
  • artifactPaths: generates a list of artifact paths from the artifact repository specified
  • -
-

A data template may contain any number of transformations (or zero). The transformations will be applied serially in order. Current available transformations:

-
    -
  • -

    expression: an expr expression. See language definition here. When defining expr expressions Argo will pass the available data to the environment as a variable called data (see example above).

    -

    We understand that the expression transformation is limited. We intend to greatly expand the functionality of this template with our community's feedback. Please see the link at the top of this document to submit ideas or use cases for this feature.

    -
  • -

This page has moved to https://argo-workflows.readthedocs.io/en/latest/data-sourcing-and-transformation/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/debug-pause/index.html b/debug-pause/index.html index bc6c6836bac8..b766154801cd 100644 --- a/debug-pause/index.html +++ b/debug-pause/index.html @@ -1,4024 +1,11 @@ - - - + - - - - - - - - - - - - Debug Pause - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Debug Pause - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Debug Pause

-
-

v3.3 and after

-
-

Introduction

-

The debug pause feature makes it possible to pause individual workflow steps for debugging before execution, after execution, or both, and then release the steps from the paused state. Currently this feature is only supported when using the Emissary Executor.

-

In order to pause a container, environment variables are used:

-
    -
  • ARGO_DEBUG_PAUSE_AFTER - to pause a step after execution
  • -
  • ARGO_DEBUG_PAUSE_BEFORE - to pause a step before execution
  • -
-

Example workflow:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: pause-after-
-spec:
-  entrypoint: whalesay
-  templates:
-    - name: whalesay
-      container:
-        image: argoproj/argosay:v2
-        env:
-          - name: ARGO_DEBUG_PAUSE_AFTER
-            value: 'true'
-
-

To release a step from a paused state, marker files named /var/run/argo/ctr/main/after or /var/run/argo/ctr/main/before are used, corresponding to when the step is paused. Pausing steps can be combined with ephemeral containers when a shell is not available in the container being debugged.

-

Example

-

1) Create a workflow where the debug pause environment variable is set. In this example ARGO_DEBUG_PAUSE_AFTER will be set, so the step will be paused after execution of the user code.

-

pause-after.yaml

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: pause-after-
-spec:
-  entrypoint: whalesay
-  templates:
-    - name: whalesay
-      container:
-        image: argoproj/argosay:v2
-        env:
-          - name: ARGO_DEBUG_PAUSE_AFTER
-            value: 'true'
-
-
argo submit -n argo --watch pause-after.yaml
-
-

2) Create a shell in the container of interest or create an ephemeral container in the pod. In this example, ephemeral containers are used.

-
kubectl debug -n argo -it POD_NAME --image=busybox --target=main --share-processes
-
-

In order to have access to the persistence volume used by the workflow step, --share-processes will have to be used.

-

The ephemeral container can be used to perform debugging operations. When debugging has been completed, create the marker file to allow the workflow step to continue. When using process namespace sharing, container file systems are visible to other containers in the pod through the /proc/$pid/root link.

-
touch /proc/1/root/run/argo/ctr/main/after
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/debug-pause/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/default-workflow-specs/index.html b/default-workflow-specs/index.html index 2e996e8b631e..fdd198aa3996 100644 --- a/default-workflow-specs/index.html +++ b/default-workflow-specs/index.html @@ -1,4014 +1,11 @@ - - - + - - - - - - - - - - - - Default Workflow Spec - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Default Workflow Spec - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Default Workflow Spec

-
-

v2.7 and after

-
-

Introduction

-

Default Workflow spec values can be set in the controller config map and will apply to all Workflows executed by that controller. If a Workflow has a value that also has a default value set in the config map, the Workflow's value will take precedence.

-

Setting Default Workflow Values

-

Default Workflow values can be specified by adding them under the workflowDefaults key in the workflow-controller-configmap. Values can be added as they would be under the Workflow.spec tag.

-

For example, to specify default values that would partially produce the following Workflow:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: gc-ttl-
-  annotations:
-    argo: workflows
-  labels:
-    foo: bar
-spec:
-  ttlStrategy:
-    secondsAfterSuccess: 5     # Time to live after workflow is successful
-  parallelism: 3
-
-

The following would be specified in the Config Map:

-
# This file describes the config settings available in the workflow controller configmap
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level
-  workflowDefaults: |
-    metadata:
-      annotations:
-        argo: workflows
-      labels:
-        foo: bar
-    spec:
-      ttlStrategy:
-        secondsAfterSuccess: 5
-      parallelism: 3
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/default-workflow-specs/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/disaster-recovery/index.html b/disaster-recovery/index.html index 6f940a354306..02ab2c549095 100644 --- a/disaster-recovery/index.html +++ b/disaster-recovery/index.html @@ -1,3920 +1,11 @@ - - - + - - - - - - - - - - - - Disaster Recovery (DR) - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Disaster Recovery (DR) - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Disaster Recovery (DR)

-

We only store data in your Kubernetes cluster. You should consider backing this up regularly.

-

Exporting example:

-
kubectl get wf,cwf,cwft,wftmpl -A -o yaml > backup.yaml
-
-

Importing example:

-
kubectl apply -f backup.yaml 
-
-

You should also regularly back up any SQL persistence you use, with whatever tool is provided for it.
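For example, a sketch assuming PostgreSQL is used for persistence (the host, user, and database names are illustrative):

```bash
# dump the Argo persistence database with the standard PostgreSQL tooling
pg_dump --host my-postgres --username argo --dbname argo > argo-db-backup.sql
```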


This page has moved to https://argo-workflows.readthedocs.io/en/latest/disaster-recovery/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/doc-changes/index.html b/doc-changes/index.html index 69053f0484d4..9aa76341e6e7 100644 --- a/doc-changes/index.html +++ b/doc-changes/index.html @@ -1,4003 +1,11 @@ - - - + - - - - - - - - - - - - Documentation Changes - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Documentation Changes - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Documentation Changes

-

Docs help our customers understand how to use workflows and fix their own problems.

-

Doc changes are checked for spelling, broken links, and lint issues by CI. To check locally, run make docs.

-

General guidelines:

-
    -
  • Explain when you would want to use a feature.
  • -
  • Provide working examples.
  • -
  • Format code using back-ticks to avoid it being reported as a spelling error.
  • -
  • Prefer 1 sentence per line of markdown
  • -
  • Follow the recommendations in the official Kubernetes Documentation Style Guide.
      -
    • Particularly useful sections include Content best practices and Patterns to avoid.
    • -
    • Note: Argo does not use the same tooling, so the sections on "shortcodes" and "EditorConfig" are not relevant.
    • -
    -
  • -
-

Running Locally

-

To test/run locally:

-
make docs-serve
-
-

Tips

-

Use a service like Grammarly to check your grammar.

-

Having your computer read text out loud is a way to catch problems, e.g.:

-
    -
  • Word substitutions (i.e. the wrong word is used, but spelled correctly).
  • -
  • Sentences that do not read correctly will sound wrong.
  • -
-

On Mac, to set-up:

-
    -
  • Go to System Preferences / Accessibility / Spoken Content.
  • -
  • Choose a System Voice (I like Siri Voice 1).
  • -
  • Enable Speak selection.
  • -
-

To hear text, select the text you want to hear, then press option+escape.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/doc-changes/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/empty-dir/index.html b/empty-dir/index.html index 6d2488a9b007..1694835e5161 100644 --- a/empty-dir/index.html +++ b/empty-dir/index.html @@ -1,3943 +1,11 @@ - - - + - - - - - - - - - - - - Empty Dir - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Empty Dir - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Empty Dir

-

While by default, the Docker and PNS workflow executors can get output artifacts/parameters from the base layer (e.g. /tmp), neither the Kubelet nor the K8SAPI executors can. It is unlikely you can get output artifacts/parameters from the base layer if you run your workflow pods with a security context.

-

You can work around this constraint by mounting volumes onto your pod. The easiest way to do this is to use an emptyDir volume.

-
-

Note

-

This is only needed for output artifacts/parameters. Input artifacts/parameters are automatically mounted to an empty-dir if needed

-
-

This example shows how to mount an output volume:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: empty-dir-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      container:
-        image: argoproj/argosay:v2
-        command: [sh, -c]
-        args: ["cowsay hello world | tee /mnt/out/hello_world.txt"]
-        volumeMounts:
-          - name: out
-            mountPath: /mnt/out
-      volumes:
-        - name: out
-          emptyDir: { }
-      outputs:
-        parameters:
-          - name: message
-            valueFrom:
-              path: /mnt/out/hello_world.txt
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/empty-dir/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/enhanced-depends-logic/index.html b/enhanced-depends-logic/index.html index 06665ccc1021..470f09e9854a 100644 --- a/enhanced-depends-logic/index.html +++ b/enhanced-depends-logic/index.html @@ -1,4070 +1,11 @@ - - - + - - - - - - - - - - - - Enhanced Depends Logic - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Enhanced Depends Logic - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Enhanced Depends Logic

-
-

v2.9 and after

-
-

Introduction

-

Prior to version 2.8, the only way to specify dependencies in DAG templates was to use the dependencies field and specify a list of other tasks the current task depends on. This syntax was limiting because it did not allow the user to specify which result of the task to depend on. For example, a task may only be relevant to run if the dependent task succeeded (or failed, etc.).

-

Depends

-

To remedy this, there exists a new field called depends, which allows users to specify dependent tasks, their statuses, -as well as any complex boolean logic. The field is a string field and the syntax is expression-like with operands having -form <task-name>.<task-result>. Examples include task-1.Succeeded, task-2.Failed, task-3.Daemoned. The full list of -available task results is as follows:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Task Result | Description | Meaning |
|---|---|---|
| .Succeeded | Task Succeeded | Task finished with no error |
| .Failed | Task Failed | Task exited with a non-0 exit code |
| .Errored | Task Errored | Task had an error other than a non-0 exit code |
| .Skipped | Task Skipped | Task was skipped |
| .Omitted | Task Omitted | Task was omitted |
| .Daemoned | Task is Daemoned and is not Pending | |
-

For convenience, an omitted task result is equivalent to (task.Succeeded || task.Skipped || task.Daemoned).

-

For example:

-
depends: "task || task-2.Failed"
-
-

is equivalent to:

-
depends: (task.Succeeded || task.Skipped || task.Daemoned) || task-2.Failed
-
-

Full boolean logic is also available. Operators include:

-
    -
  • &&
  • -
  • ||
  • -
  • !
  • -
-

Example:

-
depends: "(task-2.Succeeded || task-2.Skipped) && !task-3.Failed"
-
-

In the case that you're depending on a task that uses withItems, you can depend on -whether any of the item tasks are successful or all have failed using .AnySucceeded and .AllFailed, for example:

-
depends: "task-1.AnySucceeded || task-2.AllFailed"
-
-

Compatibility with dependencies and dag.task.continueOn

-

This feature is fully compatible with dependencies and conversion is easy.

-

To convert simply join your dependencies with &&:

-
dependencies: ["A", "B", "C"]
-
-

is equivalent to:

-
depends: "A && B && C"
-
-

Because of the added control found in depends, the dag.task.continueOn is not available when using it. Furthermore, -it is not possible to use both dependencies and depends in the same task group.
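For completeness, a sketch of depends used inside a full DAG template (task names, template names, and the image are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: depends-example-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: task-1
            template: work
          - name: task-2
            template: work
          - name: finish
            # run only if task-1 succeeded (or was skipped/daemoned) and
            # task-2 reached any terminal state
            depends: "task-1 && (task-2.Succeeded || task-2.Failed || task-2.Errored)"
            template: work
    - name: work
      container:
        image: argoproj/argosay:v2
```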


This page has moved to https://argo-workflows.readthedocs.io/en/latest/enhanced-depends-logic/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/environment-variables/index.html b/environment-variables/index.html index 586764f3acc1..42cb8f0ae70c 100644 --- a/environment-variables/index.html +++ b/environment-variables/index.html @@ -1,4437 +1,11 @@ - - - + - - - - - - - - - - - - Environment Variables - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Environment Variables - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Environment Variables

-

This document outlines environment variables that can be used to customize behavior.

-
-

Warning

-

Environment variables are typically added to test out experimental features and should not be used by most users. -Environment variables may be removed at any time.

-
-

Controller

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeDefaultDescription
ARGO_AGENT_TASK_WORKERSint16The number of task workers for the agent pod.
ALL_POD_CHANGES_SIGNIFICANTboolfalseWhether to consider all pod changes as significant during pod reconciliation.
ALWAYS_OFFLOAD_NODE_STATUSboolfalseWhether to always offload the node status.
ARCHIVED_WORKFLOW_GC_PERIODtime.Duration24hThe periodicity for GC of archived workflows.
ARGO_PPROFboolfalseEnable pprof endpoints
ARGO_PROGRESS_PATCH_TICK_DURATIONtime.Duration1mHow often self reported progress is patched into the pod annotations which means how long it takes until the controller picks up the progress change. Set to 0 to disable self reporting progress.
ARGO_PROGRESS_FILE_TICK_DURATIONtime.Duration3sHow often the progress file is read by the executor. Set to 0 to disable self reporting progress.
ARGO_REMOVE_PVC_PROTECTION_FINALIZERbooltrueRemove the kubernetes.io/pvc-protection finalizer from persistent volume claims (PVC) after marking PVCs created for the workflow for deletion, so deleted is not blocked until the pods are deleted. #6629
ARGO_TRACEstring``Whether to enable tracing statements in Argo components.
ARGO_AGENT_PATCH_RATEtime.DurationDEFAULT_REQUEUE_TIMERate that the Argo Agent will patch the workflow task-set.
ARGO_AGENT_CPU_LIMITresource.Quantity100mCPU resource limit for the agent.
ARGO_AGENT_MEMORY_LIMITresource.Quantity256mMemory resource limit for the agent.
BUBBLE_ENTRY_TEMPLATE_ERRbooltrueWhether to bubble up template errors to workflow.
CACHE_GC_PERIODtime.Duration0sHow often to perform memoization cache GC, which is disabled by default and can be enabled by providing a non-zero duration.
CACHE_GC_AFTER_NOT_HIT_DURATIONtime.Duration30sWhen a memoization cache has not been hit after this duration, it will be deleted.
CRON_SYNC_PERIODtime.Duration10sHow often to sync cron workflows.
DEFAULT_REQUEUE_TIMEtime.Duration10sThe re-queue time for the rate limiter of the workflow queue.
DISABLE_MAX_RECURSIONboolfalseSet to true to disable the recursion preventer, which will stop a workflow running which has called into a child template 100 times
EXPRESSION_TEMPLATESbooltrueEscape hatch to disable expression templates.
EVENT_AGGREGATION_WITH_ANNOTATIONSboolfalseWhether event annotations will be used when aggregating events.
GZIP_IMPLEMENTATIONstringPGZipThe implementation of compression/decompression. Currently only "PGZip" and "GZip" are supported.
INFORMER_WRITE_BACKbooltrueWhether to write back to informer instead of catching up.
HEALTHZ_AGEtime.Duration5mHow old a un-reconciled workflow is to report unhealthy.
INDEX_WORKFLOW_SEMAPHORE_KEYSbooltrueWhether or not to index semaphores.
LEADER_ELECTION_IDENTITYstringController's metadata.nameThe ID used for workflow controllers to elect a leader.
LEADER_ELECTION_DISABLEboolfalseWhether leader election should be disabled.
LEADER_ELECTION_LEASE_DURATIONtime.Duration15sThe duration that non-leader candidates will wait to force acquire leadership.
LEADER_ELECTION_RENEW_DEADLINEtime.Duration10sThe duration that the acting master will retry refreshing leadership before giving up.
LEADER_ELECTION_RETRY_PERIODtime.Duration5sThe duration that the leader election clients should wait between tries of actions.
MAX_OPERATION_TIMEtime.Duration30sThe maximum time a workflow operation is allowed to run for before re-queuing the workflow onto the work queue.
OFFLOAD_NODE_STATUS_TTLtime.Duration5mThe TTL to delete the offloaded node status. Currently only used for testing.
OPERATION_DURATION_METRIC_BUCKET_COUNTint6The number of buckets to collect the metric for the operation duration.
POD_NAMESstringv2Whether to have pod names contain the template name (v2) or be the node id (v1) - should be set the same for Argo Server.
RECENTLY_STARTED_POD_DURATIONtime.Duration10sThe duration of a pod before the pod is considered to be recently started.
RETRY_BACKOFF_DURATIONtime.Duration10msThe retry back-off duration when retrying API calls.
RETRY_BACKOFF_FACTORfloat2.0The retry back-off factor when retrying API calls.
RETRY_BACKOFF_STEPSint5The retry back-off steps when retrying API calls.
RETRY_HOST_NAME_LABEL_KEYstringkubernetes.io/hostnameThe label key for host name used when retrying templates.
TRANSIENT_ERROR_PATTERNstring""The regular expression that represents additional patterns for transient errors.
WF_DEL_PROPAGATION_POLICYstring""The deletion propagation policy for workflows.
WORKFLOW_GC_PERIODtime.Duration5mThe periodicity for GC of workflows.
SEMAPHORE_NOTIFY_DELAYtime.Duration1sTuning Delay when notifying semaphore waiters about availability in the semaphore
-

CLI parameters of the Controller can be specified as environment variables with the ARGO_ prefix. For example:

-
workflow-controller --managed-namespace=argo
-
-

Can be expressed as:

-
ARGO_MANAGED_NAMESPACE=argo workflow-controller
-
-

You can set environment variables for the Controller Deployment's container spec like the following:

-
apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: workflow-controller
-spec:
-  selector:
-    matchLabels:
-      app: workflow-controller
-  template:
-    metadata:
-      labels:
-        app: workflow-controller
-    spec:
-      containers:
-        - env:
-            - name: WORKFLOW_GC_PERIOD
-              value: 30s
-
-

Executor

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name | Type | Default | Description
EXECUTOR_RETRY_BACKOFF_DURATION | time.Duration | 1s | The retry back-off duration when the workflow executor performs retries.
EXECUTOR_RETRY_BACKOFF_FACTOR | float | 1.6 | The retry back-off factor when the workflow executor performs retries.
EXECUTOR_RETRY_BACKOFF_JITTER | float | 0.5 | The retry back-off jitter when the workflow executor performs retries.
EXECUTOR_RETRY_BACKOFF_STEPS | int | 5 | The retry back-off steps when the workflow executor performs retries.
REMOVE_LOCAL_ART_PATH | bool | false | Whether to remove local artifacts.
RESOURCE_STATE_CHECK_INTERVAL | time.Duration | 5s | The time interval between resource status checks against the specified success and failure conditions.
WAIT_CONTAINER_STATUS_CHECK_INTERVAL | time.Duration | 5s | The time interval for the wait container to check whether the containers have completed.
-

You can set environment variables for the Executor in your workflow-controller-configmap like the following:

-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  config: |
-    executor:
-      env:
-      - name: RESOURCE_STATE_CHECK_INTERVAL
-        value: 3s
-
-

Argo Server

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name | Type | Default | Description
DISABLE_VALUE_LIST_RETRIEVAL_KEY_PATTERN | string | "" | Disable the retrieval of the list of label values for keys based on this regular expression.
FIRST_TIME_USER_MODAL | bool | true | Show this modal.
FEEDBACK_MODAL | bool | true | Show this modal.
IP_KEY_FUNC_HEADERS | string | "" | List of comma-separated request headers containing IPs to use for rate limiting. For example, "X-Forwarded-For,X-Real-IP". By default, uses the request's remote IP address.
NEW_VERSION_MODAL | bool | true | Show this modal.
POD_NAMES | string | v2 | Whether to have pod names contain the template name (v2) or be the node ID (v1) - should be set the same for the Controller.
GRPC_MESSAGE_SIZE | string | 104857600 | Use a different gRPC max message size for the Server (to support huge workflows).
-

CLI parameters of the Server can be specified as environment variables with the ARGO_ prefix. For example:

-
argo server --managed-namespace=argo
-
-

Can be expressed as:

-
ARGO_MANAGED_NAMESPACE=argo argo server
-
-

You can set environment variables for the Server Deployment's container spec like the following:

-
apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: argo-server
-spec:
-  selector:
-    matchLabels:
-      app: argo-server
-  template:
-    metadata:
-      labels:
-        app: argo-server
-    spec:
-      containers:
-        - args:
-            - server
-          image: argoproj/argocli:latest
-          name: argo-server
-          env:
-            - name: GRPC_MESSAGE_SIZE
-              value: "209715200"
-          ports:
-          # ...
-
- - - - -


- - - - -
-
-
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/environment-variables/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/estimated-duration/index.html b/estimated-duration/index.html index 0ab8a60b82e8..80dac9a72e40 100644 --- a/estimated-duration/index.html +++ b/estimated-duration/index.html @@ -1,3926 +1,11 @@ - - - + - - - - - - - - - - - - Estimated Duration - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Estimated Duration - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Estimated Duration

-
-

v2.12 and after

-
-

When you run a workflow, the controller will try to estimate its duration.

-

This is based on the most recently successful workflow submitted from the same workflow template, cluster workflow template or cron workflow.

-

To get this data, the controller queries the Kubernetes API first (as this is faster) and then the workflow archive (if enabled).

-
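As a quick way to see what was estimated for a given workflow, you can read it back from the workflow's status. This is a minimal sketch; the field name status.estimatedDuration (reported in seconds) is an assumption to verify against your own workflow's status:

kubectl get wf my-wf -o jsonpath='{.status.estimatedDuration}'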

If you've used tools like Jenkins, you'll know that estimates can be inaccurate:

-
    -
  • A pod may spend a long time pending scheduling.
  • -
  • The workflow is non-deterministic, e.g. it uses when to execute different paths.
  • -
  • The workflow can vary in scale, e.g. it uses withItems and so sometimes runs 100 nodes, sometimes 1,000.
  • -
  • The pod run times are unpredictable.
  • -
  • The workflow is parametrized, and different parameters affect its duration.
  • -
- - - - -


- - - - -
-
-
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/estimated-duration/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/events/index.html b/events/index.html index 2702a326906a..429f9c35b916 100644 --- a/events/index.html +++ b/events/index.html @@ -1,4325 +1,11 @@ - - - + - - - - - - - - - - - - Events - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Events - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - - - - -
-
- - - - - - - - -

Events

-
-

v2.11 and after

-
-

Overview

-

To support external webhooks, we have this endpoint /api/v1/events/{namespace}/{discriminator}. Events sent to that endpoint can be any JSON data.

-

These events can submit workflow templates or cluster workflow templates.

-

You may also wish to read about webhooks.

-

Authentication and Security

-

Clients wanting to send events to the endpoint need an access token.

-

It is only possible to submit workflow templates that your access token has access to: example role.

-
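As a rough sketch of what that token's permissions typically cover (the exact rules below are an assumption; treat the linked example role as the source of truth), the Role bound to the client's service account needs to list event bindings, read the workflow templates it may trigger, and create workflows:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: submit-workflow-template   # hypothetical name
rules:
  - apiGroups: [argoproj.io]
    resources: [workfloweventbindings]
    verbs: [list]
  - apiGroups: [argoproj.io]
    resources: [workflowtemplates]
    verbs: [get]
  - apiGroups: [argoproj.io]
    resources: [workflows]
    verbs: [create]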

Example (note the trailing slash):

-
curl https://localhost:2746/api/v1/events/argo/ \
-  -H "Authorization: $ARGO_TOKEN" \
-  -d '{"message": "hello"}'
-
-

With a discriminator:

-
curl https://localhost:2746/api/v1/events/argo/my-discriminator \
-  -H "Authorization: $ARGO_TOKEN" \
-  -d '{"message": "hello"}'
-
-

The event endpoint will always return in under 10 seconds because the event will be queued and processed asynchronously. This means you will not be notified synchronously of failure. It will return a failure (503) if the event processing queue is full.

-
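Since a full queue is reported as a 503, a client that must not drop events can simply retry with a short back-off. A minimal sketch (same endpoint and token as above; the retry policy itself is only an illustration):

for i in 1 2 3 4 5; do
  code=$(curl -s -o /dev/null -w '%{http_code}' \
    https://localhost:2746/api/v1/events/argo/ \
    -H "Authorization: $ARGO_TOKEN" \
    -d '{"message": "hello"}')
  [ "$code" != "503" ] && break  # accepted, or a non-retryable error
  sleep $((i * 2))               # back off before retrying
done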
-

Processing Order

-

Events may not always be processed in the order they are received.

-
-

Workflow Template triggered by the event

-

Before binding an event to a workflow template, you must create the workflow template that you want to trigger. The following one takes as input the "message" parameter specified in the API call body, passed through the WorkflowEventBinding parameters section, and finally resolved here as the message of the whalesay image.

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: my-wf-tmple
-  namespace: argo
-spec:
-  templates:
-    - name: main
-      inputs:
-        parameters:
-          - name: message
-            value: "{{workflow.parameters.message}}"
-      container:
-        image: docker/whalesay:latest
-        command: [cowsay]
-        args: ["{{inputs.parameters.message}}"]
-  entrypoint: main
-
-

Submitting A Workflow From A Workflow Template

-

A workflow template will be submitted (i.e. a workflow will be created from it), and that workflow can be created using parameters from the event itself. The following example will be triggered by an event with "message" in the payload. That message will be used as an argument for the created workflow. Note that the meta-data header name "x-argo-e2e" is lowercase in the selector so that it matches. Incoming header names are converted to lowercase.

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowEventBinding
-metadata:
-  name: event-consumer
-spec:
-  event:
-    # metadata header name must be lowercase to match in selector
-    selector: payload.message != "" && metadata["x-argo-e2e"] == ["true"] && discriminator == "my-discriminator"
-  submit:
-    workflowTemplateRef:
-      name: my-wf-tmple
-    arguments:
-      parameters:
-      - name: message
-        valueFrom:
-          event: payload.message
-
-

Note that workflowTemplateRef refers to a template with the name my-wf-tmple; this template has to be created before the event is triggered. After that, apply the WorkflowEventBinding explained above (in this example saved as event-template.yml) to create the binding between the workflow template and the event (you can use kubectl to do that):

-
kubectl apply -f event-template.yml
-
-

Finally, you can trigger the creation of your first parameterized workflow from the template by using the following call:

-

Event:

-
curl $ARGO_SERVER/api/v1/events/argo/my-discriminator \
-    -H "Authorization: $ARGO_TOKEN" \
-    -H "X-Argo-E2E: true" \
-    -d '{"message": "hello events"}'
-
-
-

Malformed Expressions

-

If the expression is malformed, the error is logged; it is not surfaced in the workflow logs or the UI.

-
-

Customizing the Workflow Meta-Data

-

You can customize the name of the submitted workflow as well as add annotations and labels. This is done by adding a metadata object to the submit object.

-

Normally the name of the workflow created from an event is simply the name of the template with a time-stamp appended. This can be customized by setting the name in the metadata object.

-

Annotations and labels are added in the same fashion.

-

All the values for the name, annotations and labels are treated as expressions (see below for details). The metadata object is the same metadata type as on all Kubernetes resources and as such is parsed in the same manner. It is best to enclose the expression in single quotes to avoid any problems when submitting the event binding to Kubernetes.

-

This is an example snippet of how to set the name, annotations and labels. This is based on the workflow binding from above, and the first event.

-
submit:
-  metadata:
-    annotations:
-      anAnnotation: 'event.payload.message'
-    name: 'event.payload.message + "-world"'
-    labels:
-      someLabel: '"literal string"'
-
-

This will result in the workflow being named "hello-world" instead of my-wf-tmple-<timestamp>. There will be an extra label with the key someLabel and a value of "literal string". There will also be an extra annotation with the key anAnnotation and a value of "hello".

-

Be careful when setting the name. If the name expression evaluates to that of a currently existing workflow, the new workflow will fail to submit.

-
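One way to avoid such collisions is to build something unique into the name expression, for example (event.payload.id is a hypothetical field; use whatever unique value your events actually carry):

submit:
  metadata:
    name: 'event.payload.message + "-" + string(event.payload.id)'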

The name, annotation and label expressions must each evaluate to a string and follow the normal Kubernetes naming requirements.

-

Event Expression Syntax and the Event Expression Environment

-

Event expressions are expressions that are evaluated over the event expression environment.

-

Expression Syntax

-

Because the endpoint accepts any JSON data, it is the user's responsibility to write a suitable expression to correctly filter the events they are interested in. Therefore, DO NOT assume the existence of any fields, and guard against them using a nil check.

-
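For example, instead of assuming a repository field exists, guard it first (the payload fields here are illustrative):

payload.repository != nil && payload.repository.clone_url == "https://github.com/argoproj/argo-workflows"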

Learn more about expression syntax.

-

Expression Environment

-

The event environment contains:

-
    -
  • payload the event payload.
  • -
  • metadata event meta-data, including HTTP headers.
  • -
  • discriminator the discriminator from the URL.
  • -
-

Payload

-

This is the JSON payload of the event.

-

Example:

-
payload.repository.clone_url == "http://github.com/argoproj/argo"
-
-

Meta-Data

-

Meta-data is data about the event; this includes headers:

-

Headers

-

HTTP header names are lowercase and only include those that have x- as their prefix. Their values are lists, not single values.

-
    -
  • Wrong: metadata["X-Github-Event"] == "push"
  • -
  • Wrong: metadata["x-github-event"] == "push"
  • -
  • Wrong: metadata["X-Github-Event"] == ["push"]
  • -
  • Wrong: metadata["github-event"] == ["push"]
  • -
  • Wrong: metadata["authorization"] == ["push"]
  • -
  • Right: metadata["x-github-event"] == ["push"]
  • -
-

Example:

-
metadata["x-argo"] == ["yes"]
-
-

Discriminator

-

This is only for edge-cases where neither the payload nor the meta-data provides enough information to discriminate. Typically, it should be empty and ignored.

-

Example:

-
discriminator == "my-discriminator"
-
-

High-Availability

-
-

Run Minimum 2 Replicas

-

You MUST run a minimum of two Argo Server replicas if you do not want to lose events.

-
-

If you are processing large numbers of events, you may need to scale up the Argo Server to handle them. By default, a single Argo Server can be processing 64 events before the endpoint starts returning 503 errors.

-

Vertically you can:

-
    -
  • Increase the size of the event operation queue --event-operation-queue-size (good for temporary event bursts).
  • -
  • Increase the number of workers --event-worker-count (good for sustained numbers of events), as shown in the sketch below.
  • -
-
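Both flags are arguments to the Argo Server binary, so one way to set them is on the Deployment's container args. A sketch with purely illustrative values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  template:
    spec:
      containers:
        - name: argo-server
          args:
            - server
            - --event-operation-queue-size=128  # absorb short bursts
            - --event-worker-count=8            # sustained throughput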

Horizontally you can:

-
    -
  • Run more Argo Servers (good for sustained numbers of events AND high-availability).
  • -
- - - - -


- - - - -
-
-
- - - - Back to top - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/events/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/executor_plugins/index.html b/executor_plugins/index.html index 2015f94fc03e..ae0e917b465b 100644 --- a/executor_plugins/index.html +++ b/executor_plugins/index.html @@ -1,4368 +1,11 @@ - - - + - - - - - - - - - - - - Executor Plugins - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Executor Plugins - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
- -
-
- - -
-
- - - - - - - - -

Executor Plugins

-
-

Since v3.3

-
-

Configuration

-

Plugins are disabled by default. To enable them, start the controller with ARGO_EXECUTOR_PLUGINS=true, e.g.

-
apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: workflow-controller
-spec:
-  template:
-    spec:
-      containers:
-        - name: workflow-controller
-          env:
-            - name: ARGO_EXECUTOR_PLUGINS
-              value: "true"
-
-

When using the Helm chart, add this to your values.yaml:

-
controller:
-  extraEnv:
-    - name: ARGO_EXECUTOR_PLUGINS
-      value: "true"
-
-

Template Executor

-

This is a plugin that runs custom "plugin" templates, e.g. for non-pod tasks such as Tekton builds, Spark jobs, sending Slack notifications.

-

A Simple Python Plugin

-

Let's make a Python plugin that prints "hello" each time the workflow is operated on.

-

We need the following:

-
    -
  1. Plugins enabled (see above).
  2. An HTTP server that will be run as a sidecar to the main container and will respond to RPC HTTP requests from the executor with this API contract.
  3. A plugin.yaml configuration file that is turned into a config map so the controller can discover the plugin.
-

A template executor plugin services HTTP POST requests on /api/v1/template.execute:

-
curl http://localhost:4355/api/v1/template.execute -d \
-'{
-  "workflow": {
-    "metadata": {
-      "name": "my-wf"
-    }
-  },
-  "template": {
-    "name": "my-tmpl",
-    "inputs": {},
-    "outputs": {},
-    "plugin": {
-      "hello": {}
-    }
-  }
-}'
-# ...
-HTTP/1.1 200 OK
-{
-  "node": {
-    "phase": "Succeeded",
-    "message": "Hello template!"
-  }
-}
-
-

Tip: The port number can be anything, but must not conflict with other plugins. Don't use common ports such as 80, 443, 8080, 8081, 8443. If you plan to publish your plugin, choose a random port number under 10,000 and create a PR to add your plugin. If not, use a port number greater than 10,000.

-

We'll need to create a script that starts an HTTP server. Save this as server.py:

-
import json
-from http.server import BaseHTTPRequestHandler, HTTPServer
-
-with open("/var/run/argo/token") as f:
-    token = f.read().strip()
-
-
-class Plugin(BaseHTTPRequestHandler):
-
-    def args(self):
-        return json.loads(self.rfile.read(int(self.headers.get('Content-Length'))))
-
-    def reply(self, reply):
-        self.send_response(200)
-        self.end_headers()
-        self.wfile.write(json.dumps(reply).encode("UTF-8"))
-
-    def forbidden(self):
-        self.send_response(403)
-        self.end_headers()
-
-    def unsupported(self):
-        self.send_response(404)
-        self.end_headers()
-
-    def do_POST(self):
-        if self.headers.get("Authorization") != "Bearer " + token:
-            self.forbidden()
-        elif self.path == '/api/v1/template.execute':
-            args = self.args()
-            if 'hello' in args['template'].get('plugin', {}):
-                self.reply(
-                    {'node': {'phase': 'Succeeded', 'message': 'Hello template!',
-                              'outputs': {'parameters': [{'name': 'foo', 'value': 'bar'}]}}})
-            else:
-                self.reply({})
-        else:
-            self.unsupported()
-
-
-if __name__ == '__main__':
-    httpd = HTTPServer(('', 4355), Plugin)
-    httpd.serve_forever()
-
-

Tip: Plugins can be written in any language you can run as a container. Python is convenient because you can embed the script in the container.

-

Some things to note here:

-
    -
  • You only need to implement the calls you need. Return 404 and it won't be called again.
  • -
  • The path is the RPC method name.
  • -
  • You should check that the Authorization header contains the same value as /var/run/argo/token. Return 403 if it does not.
  • -
  • The request body contains the template's input parameters.
  • -
  • The response body may contain the node's result, including the phase (e.g. "Succeeded" or "Failed") and a message.
  • -
  • If the response is {}, then the plugin is saying it cannot execute the plugin template, e.g. it is a Slack plugin, but the template is a Tekton job.
  • -
  • If the status code is 404, then the plugin will not be called again.
  • -
  • If you save the file as server.*, it will be copied to the sidecar container's args field. This is useful for building self-contained plugins in scripting languages like Python or Node.JS.
  • -
-

Next, create a manifest named plugin.yaml:

-
apiVersion: argoproj.io/v1alpha1
-kind: ExecutorPlugin
-metadata:
-  name: hello
-spec:
-  sidecar:
-    container:
-      command:
-        - python
-        - -u # disables output buffering
-        - -c
-      image: python:alpine3.6
-      name: hello-executor-plugin
-      ports:
-        - containerPort: 4355
-      securityContext:
-        runAsNonRoot: true
-        runAsUser: 65534 # nobody
-      resources:
-        requests:
-          memory: "64Mi"
-          cpu: "250m"
-        limits:
-          memory: "128Mi"
-          cpu: "500m"
-
-

Build and install as follows:

-
argo executor-plugin build .
-kubectl -n argo apply -f hello-executor-plugin-configmap.yaml
-
-

Check your controller logs:

-
level=info msg="Executor plugin added" name=hello-controller-plugin
-
-

Run this workflow.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      plugin:
-        hello: { }
-
-

You'll see the workflow complete successfully.

-

Discovery

-

When a workflow is run, plugins are loaded from:

-
    -
  • The workflow's namespace.
  • -
  • The Argo installation namespace (typically argo).
  • -
-

If two plugins have the same name, only the one in the workflow's namespace is loaded.

-

Secrets

-

If you interact with a third-party system, you'll need access to secrets. Don't put them in plugin.yaml. Use a secret:

-
spec:
-  sidecar:
-    container:
-      env:
-        - name: URL
-          valueFrom:
-            secretKeyRef:
-              name: slack-executor-plugin
-              key: URL
-
-
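The referenced secret must exist in the namespace where the agent pod runs. One way to create it (the URL value is a placeholder):

kubectl -n argo create secret generic slack-executor-plugin \
  --from-literal=URL=https://hooks.slack.com/services/XXX/YYY/ZZZ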

Refer to the Kubernetes Secret documentation for secret best practices and security considerations.

-

Resources, Security Context

-

We made these mandatory, so no one can create a plugin that uses an unreasonable amount of memory or runs as root unless they deliberately do so:

-
spec:
-  sidecar:
-    container:
-      resources:
-        requests:
-          cpu: 100m
-          memory: 32Mi
-        limits:
-          cpu: 200m
-          memory: 64Mi
-      securityContext:
-        runAsNonRoot: true
-        runAsUser: 1000
-
-

Failure

-

A plugin may fail as follows:

-
    -
  • Connection/socket error - considered transient.
  • -
  • Timeout - considered transient.
  • -
  • 404 error - method is not supported by the plugin, as a result the method will not be called again (in the same workflow).
  • -
  • 503 error - considered transient.
  • -
  • Other 4xx/5xx errors - considered fatal.
  • -
-

Transient errors are retried; all other errors are considered fatal.

-

Fatal errors will result in failed steps.

-
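Inside the plugin this means: report a temporary downstream outage with a 503 so the call is retried, and anything permanent with another 4xx/5xx status so the step fails. A minimal sketch in the style of the server.py handler above (backend_available() is a hypothetical probe of the third-party system the plugin talks to):

from http.server import BaseHTTPRequestHandler


def backend_available():
    # Hypothetical check of the third-party system this plugin depends on.
    return False


class Plugin(BaseHTTPRequestHandler):
    def do_POST(self):
        if not backend_available():
            # Transient failure: a 503 tells the executor to retry this call later.
            self.send_response(503)
            self.end_headers()
            return
        # Normal handling would go here; any other unexpected 4xx/5xx
        # (other than 404/503) is treated as fatal and fails the step.
        self.send_response(200)
        self.end_headers()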

Re-Queue

-

It might be the case that the plugin can't finish straight away, e.g. it starts a long-running task. When that happens, you return "Pending" or "Running" and a re-queue time:

-
{
-  "node": {
-    "phase": "Running",
-    "message": "Long-running task started"
-  },
-  "requeue": "2m"
-}
-
-

In this example, the task will be re-queued and template.execute will be called again in 2 minutes.

-

Debugging

-

You can find the plugin's log in the agent pod's sidecar, e.g.:

-
kubectl -n argo logs ${agentPodName} -c hello-executor-plugin
-
-

Listing Plugins

-

Because plugins are just config maps, you can list them using kubectl:

-
kubectl get cm -l workflows.argoproj.io/configmap-type=ExecutorPlugin
-
-

Examples and Community Contributed Plugins

-

Plugin directory

-

Publishing Your Plugin

-

If you want to publish and share your plugin (we hope you do!), then submit a pull request to add it to the above directory.

- - - - -


- - - - -
-
-
- - - - Back to top - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/executor_plugins/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/executor_swagger/index.html b/executor_swagger/index.html index 04e72ce42ae5..828b32577e7a 100644 --- a/executor_swagger/index.html +++ b/executor_swagger/index.html @@ -1,25967 +1,11 @@ - - - + - - - - - - - - - - - - The API for an executor plugin. - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + The API for an executor plugin. - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

The API for an executor plugin.

-

Information

-

Version

-

0.0.1

-

Content negotiation

-

URI Schemes

-
    -
  • http
  • -
-

Consumes

-
    -
  • application/json
  • -
-

Produces

-
    -
  • application/json
  • -
-

All endpoints

-

operations

- - - - - - - - - - - - - - - - - -
Method | URI | Name | Summary
POST | /api/v1/template.execute | execute template |
-

Paths

-

execute template (executeTemplate)

-
POST /api/v1/template.execute
-
-

Parameters

- - - - - - - - - - - - - - - - - - - - - - - - - -
Name | Source | Type | Go type | Separator | Required | Default | Description
Body | body | ExecuteTemplateArgs | models.ExecuteTemplateArgs | | | |
-
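For reference, the request body (ExecuteTemplateArgs) carries the workflow and the template being executed, as in the curl example on the Executor Plugins page; a minimal sketch:

{
  "workflow": {"metadata": {"name": "my-wf"}},
  "template": {"name": "my-tmpl", "inputs": {}, "outputs": {}, "plugin": {"hello": {}}}
}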

All responses

- - - - - - - - - - - - - - - - - - - -
Code | Status | Description | Has headers | Schema
200 | OK | | | schema
-

Responses

-
200
-

Status: OK

-
Schema
-

ExecuteTemplateReply

-

Models

-

AWSElasticBlockStoreVolumeSource

-
-

An AWS EBS disk must exist before mounting to a container. The disk -must also be in the same AWS zone as the kubelet. An AWS EBS disk -can only be mounted as read/write once. AWS EBS volumes support -ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is the filesystem type of the volume that you want to mount.
Tip: Ensure that the filesystem type is supported by the host operating system.
Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
TODO: how do we prevent errors in the filesystem from compromising the machine
+optional
partitionint32 (formatted integer)int32partition is the partition in the volume that you want to mount.
If omitted, the default is to mount by volume name.
Examples: For volume /dev/sda1, you specify the partition as "1".
Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).
+optional
readOnlybooleanboolreadOnly value true will force the readOnly setting in VolumeMounts.
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
+optional
volumeIDstringstringvolumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume).
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
-

Affinity

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
nodeAffinityNodeAffinityNodeAffinity
podAffinityPodAffinityPodAffinity
podAntiAffinityPodAntiAffinityPodAntiAffinity
-

Amount

-
-

+kubebuilder:validation:Type=number

-
-

interface{}

-

AnyString

-
-

It will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string. -It will marshall back to string - marshalling is not symmetric.

-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
AnyStringstringstringIt will unmarshall int64, int32, float64, float32, boolean, a plain string and represents it as string.
It will marshall back to string - marshalling is not symmetric.
-

ArchiveStrategy

-
-

ArchiveStrategy describes how to archive files/directory when saving artifacts

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
noneNoneStrategyNoneStrategy
tarTarStrategyTarStrategy
zipZipStrategyZipStrategy
-

Arguments

-
-

Arguments to a template

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
artifactsArtifactsArtifacts
parameters[]Parameter[]*ParameterParameters is the list of parameters to pass to the template or workflow
+patchStrategy=merge
+patchMergeKey=name
-

Artifact

-
-

Artifact indicates an artifact to place at a specified path

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
archiveArchiveStrategyArchiveStrategy
archiveLogsbooleanboolArchiveLogs indicates if the container logs should be archived
artifactGCArtifactGCArtifactGC
artifactoryArtifactoryArtifactArtifactoryArtifact
azureAzureArtifactAzureArtifact
deletedbooleanboolHas this been deleted?
fromstringstringFrom allows an artifact to reference an artifact from a previous step
fromExpressionstringstringFromExpression, if defined, is evaluated to specify the value for the artifact
gcsGCSArtifactGCSArtifact
gitGitArtifactGitArtifact
globalNamestringstringGlobalName exports an output artifact to the global scope, making it available as
'{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts
hdfsHDFSArtifactHDFSArtifact
httpHTTPArtifactHTTPArtifact
modeint32 (formatted integer)int32mode bits to use on this file, must be a value between 0 and 0777
set when loading input artifacts.
namestringstringname of the artifact. must be unique within a template's inputs/outputs.
optionalbooleanboolMake Artifacts optional, if Artifacts doesn't generate or exist
ossOSSArtifactOSSArtifact
pathstringstringPath is the container path to the artifact
rawRawArtifactRawArtifact
recurseModebooleanboolIf mode is set, apply the permission recursively into the artifact if it is a folder
s3S3ArtifactS3Artifact
subPathstringstringSubPath allows an artifact to be sourced from a subpath within the specified source
-

ArtifactGC

-
-

ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
podMetadataMetadataMetadata
serviceAccountNamestringstringServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion
strategyArtifactGCStrategyArtifactGCStrategy
-

ArtifactGCStrategy

- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
ArtifactGCStrategystringstring
-

ArtifactLocation

-
-

It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). -It is also used to describe the location of multiple artifacts such as the archive location -of a single workflow step, which the executor will use as a default location to store its files.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
archiveLogsbooleanboolArchiveLogs indicates if the container logs should be archived
artifactoryArtifactoryArtifactArtifactoryArtifact
azureAzureArtifactAzureArtifact
gcsGCSArtifactGCSArtifact
gitGitArtifactGitArtifact
hdfsHDFSArtifactHDFSArtifact
httpHTTPArtifactHTTPArtifact
ossOSSArtifactOSSArtifact
rawRawArtifactRawArtifact
s3S3ArtifactS3Artifact
-

ArtifactPaths

-
-

ArtifactPaths expands a step from a collection of artifacts

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
archiveArchiveStrategyArchiveStrategy
archiveLogsbooleanboolArchiveLogs indicates if the container logs should be archived
artifactGCArtifactGCArtifactGC
artifactoryArtifactoryArtifactArtifactoryArtifact
azureAzureArtifactAzureArtifact
deletedbooleanboolHas this been deleted?
fromstringstringFrom allows an artifact to reference an artifact from a previous step
fromExpressionstringstringFromExpression, if defined, is evaluated to specify the value for the artifact
gcsGCSArtifactGCSArtifact
gitGitArtifactGitArtifact
globalNamestringstringGlobalName exports an output artifact to the global scope, making it available as
'{{workflow.outputs.artifacts.XXXX}} and in workflow.status.outputs.artifacts
hdfsHDFSArtifactHDFSArtifact
httpHTTPArtifactHTTPArtifact
modeint32 (formatted integer)int32mode bits to use on this file, must be a value between 0 and 0777
set when loading input artifacts.
namestringstringname of the artifact. must be unique within a template's inputs/outputs.
optionalbooleanboolMake Artifacts optional, if Artifacts doesn't generate or exist
ossOSSArtifactOSSArtifact
pathstringstringPath is the container path to the artifact
rawRawArtifactRawArtifact
recurseModebooleanboolIf mode is set, apply the permission recursively into the artifact if it is a folder
s3S3ArtifactS3Artifact
subPathstringstringSubPath allows an artifact to be sourced from a subpath within the specified source
-

ArtifactoryArtifact

-
-

ArtifactoryArtifact is the location of an artifactory artifact

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
passwordSecretSecretKeySelectorSecretKeySelector
urlstringstringURL of the artifact
usernameSecretSecretKeySelectorSecretKeySelector
-

Artifacts

-

[]Artifact

-

AzureArtifact

-
-

AzureArtifact is the location of a an Azure Storage artifact

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
accountKeySecretSecretKeySelectorSecretKeySelector
blobstringstringBlob is the blob name (i.e., path) in the container where the artifact resides
containerstringstringContainer is the container where resources will be stored
endpointstringstringEndpoint is the service url associated with an account. It is most likely "https://.blob.core.windows.net"
useSDKCredsbooleanboolUseSDKCreds tells the driver to figure out credentials based on sdk defaults.
-

AzureDataDiskCachingMode

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
AzureDataDiskCachingModestringstring+enum
-

AzureDataDiskKind

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
AzureDataDiskKindstringstring+enum
-

AzureDiskVolumeSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
cachingModeAzureDataDiskCachingModeAzureDataDiskCachingMode
diskNamestringstringdiskName is the Name of the data disk in the blob storage
diskURIstringstringdiskURI is the URI of data disk in the blob storage
fsTypestringstringfsType is Filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
+optional
kindAzureDataDiskKindAzureDataDiskKind
readOnlybooleanboolreadOnly Defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
+optional
-

AzureFileVolumeSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
readOnlybooleanboolreadOnly defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
+optional
secretNamestringstringsecretName is the name of secret that contains Azure Storage Account Name and Key
shareNamestringstringshareName is the azure share Name
-

Backoff

-
-

Backoff is a backoff strategy to use within retryStrategy

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
durationstringstringDuration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h")
factorIntOrStringIntOrString
maxDurationstringstringMaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy
-

BasicAuth

-
-

BasicAuth describes the secret selectors required for basic authentication

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
passwordSecretSecretKeySelectorSecretKeySelector
usernameSecretSecretKeySelectorSecretKeySelector
-

CSIVolumeSource

-
-

Represents a source location of a volume to mount, managed by an external CSI driver

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
driverstringstringdriver is the name of the CSI driver that handles this volume.
Consult with your admin for the correct name as registered in the cluster.
fsTypestringstringfsType to mount. Ex. "ext4", "xfs", "ntfs".
If not provided, the empty value is passed to the associated CSI driver
which will determine the default filesystem to apply.
+optional
nodePublishSecretRefLocalObjectReferenceLocalObjectReference
readOnlybooleanboolreadOnly specifies a read-only configuration for the volume.
Defaults to false (read/write).
+optional
volumeAttributesmap of stringmap[string]stringvolumeAttributes stores driver-specific properties that are passed to the CSI
driver. Consult your driver's documentation for supported values.
+optional
-

Cache

-
-

Cache is the configuration for the type of cache to be used

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
configMapConfigMapKeySelectorConfigMapKeySelector
-

Capabilities

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
add[]Capability[]CapabilityAdded capabilities
+optional
drop[]Capability[]CapabilityRemoved capabilities
+optional
-

Capability

-
-

Capability represent POSIX capabilities type

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
CapabilitystringstringCapability represent POSIX capabilities type
-

CephFSVolumeSource

-
-

Represents a Ceph Filesystem mount that lasts the lifetime of a pod -Cephfs volumes do not support ownership management or SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
monitors[]string[]stringmonitors is Required: Monitors is a collection of Ceph monitors
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
pathstringstringpath is Optional: Used as the mounted root, rather than the full Ceph tree, default is /
+optional
readOnlybooleanboolreadOnly is Optional: Defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
+optional
secretFilestringstringsecretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
+optional
secretRefLocalObjectReferenceLocalObjectReference
userstringstringuser is optional: User is the rados user name, default is admin
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it
+optional
-

CinderVolumeSource

-
-

A Cinder volume must exist before mounting to a container. -The volume must also be in the same region as the kubelet. -Cinder volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is the filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
More info: https://examples.k8s.io/mysql-cinder-pd/README.md
+optional
readOnlybooleanboolreadOnly defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
More info: https://examples.k8s.io/mysql-cinder-pd/README.md
+optional
secretRefLocalObjectReferenceLocalObjectReference
volumeIDstringstringvolumeID used to identify the volume in cinder.
More info: https://examples.k8s.io/mysql-cinder-pd/README.md
-

ClientCertAuth

-
-

ClientCertAuth holds necessary information for client authentication via certificates

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
clientCertSecretSecretKeySelectorSecretKeySelector
clientKeySecretSecretKeySelectorSecretKeySelector
-

ConfigMapEnvSource

-
-

The contents of the target ConfigMap's Data field will represent the -key-value pairs as environment variables.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
namestringstringName of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?
+optional
optionalbooleanboolSpecify whether the ConfigMap must be defined
+optional
-

ConfigMapKeySelector

-
-

+structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
keystringstringThe key to select.
namestringstringName of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?
+optional
optionalbooleanboolSpecify whether the ConfigMap or its key must be defined
+optional
-

ConfigMapProjection

-
-

The contents of the target ConfigMap's Data field will be presented in a -projected volume as files using the keys in the Data field as the file names, -unless the items element is populated with specific mappings of keys to paths. -Note that this is identical to a configmap volume source without the default -mode.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
items[]KeyToPath[]*KeyToPathitems if unspecified, each key-value pair in the Data field of the referenced
ConfigMap will be projected into the volume as a file whose name is the
key and content is the value. If specified, the listed keys will be
projected into the specified paths, and unlisted keys will not be
present. If a key is specified which is not present in the ConfigMap,
the volume setup will error unless it is marked optional. Paths must be
relative and may not contain the '..' path or start with '..'.
+optional
namestringstringName of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?
+optional
optionalbooleanbooloptional specify whether the ConfigMap or its keys must be defined
+optional
-

ConfigMapVolumeSource

-
-

The contents of the target ConfigMap's Data field will be presented in a -volume as files using the keys in the Data field as the file names, unless -the items element is populated with specific mappings of keys to paths. -ConfigMap volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
defaultModeint32 (formatted integer)int32defaultMode is optional: mode bits used to set permissions on created files by default.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
Defaults to 0644.
Directories within the path are not affected by this setting.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
+optional
items[]KeyToPath[]*KeyToPathitems if unspecified, each key-value pair in the Data field of the referenced
ConfigMap will be projected into the volume as a file whose name is the
key and content is the value. If specified, the listed keys will be
projected into the specified paths, and unlisted keys will not be
present. If a key is specified which is not present in the ConfigMap,
the volume setup will error unless it is marked optional. Paths must be
relative and may not contain the '..' path or start with '..'.
+optional
namestringstringName of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?
+optional
optionalbooleanbooloptional specify whether the ConfigMap or its keys must be defined
+optional
-

Container

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
args[]string[]stringArguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
+optional
command[]string[]stringEntrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
+optional
env[]EnvVar[]*EnvVarList of environment variables to set in the container.
Cannot be updated.
+optional
+patchMergeKey=name
+patchStrategy=merge
envFrom[]EnvFromSource[]*EnvFromSourceList of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.
+optional
imagestringstringContainer image name.
More info: https://kubernetes.io/docs/concepts/containers/images
This field is optional to allow higher level config management to default or override
container images in workload controllers like Deployments and StatefulSets.
+optional
imagePullPolicyPullPolicyPullPolicy
lifecycleLifecycleLifecycle
livenessProbeProbeProbe
namestringstringName of the container specified as a DNS_LABEL.
Each container in a pod must have a unique name (DNS_LABEL).
Cannot be updated.
ports[]ContainerPort[]*ContainerPortList of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network.
Cannot be updated.
+optional
+patchMergeKey=containerPort
+patchStrategy=merge
+listType=map
+listMapKey=containerPort
+listMapKey=protocol
readinessProbeProbeProbe
resourcesResourceRequirementsResourceRequirements
securityContextSecurityContextSecurityContext
startupProbeProbeProbe
stdinbooleanboolWhether this container should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the container will always result in EOF.
Default is false.
+optional
stdinOncebooleanboolWhether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the container is restarted. If this
flag is false, a container processes that reads from stdin will never receive an EOF.
Default is false
+optional
terminationMessagePathstringstringOptional: Path at which the file to which the container's termination message
will be written is mounted into the container's filesystem.
Message written is intended to be brief final status, such as an assertion failure message.
Will be truncated by the node if greater than 4096 bytes. The total message length across
all containers will be limited to 12kb.
Defaults to /dev/termination-log.
Cannot be updated.
+optional
terminationMessagePolicyTerminationMessagePolicyTerminationMessagePolicy
ttybooleanboolWhether this container should allocate a TTY for itself, also requires 'stdin' to be true.
Default is false.
+optional
volumeDevices[]VolumeDevice[]*VolumeDevicevolumeDevices is the list of block devices to be used by the container.
+patchMergeKey=devicePath
+patchStrategy=merge
+optional
volumeMounts[]VolumeMount[]*VolumeMountPod volumes to mount into the container's filesystem.
Cannot be updated.
+optional
+patchMergeKey=mountPath
+patchStrategy=merge
workingDirstringstringContainer's working directory.
If not specified, the container runtime's default will be used, which
might be configured in the container image.
Cannot be updated.
+optional
ContainerNode

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| args | []string | []string |  |  | Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional |  |
| command | []string | []string |  |  | Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell +optional |  |
| dependencies | []string | []string |  |  |  |  |
| env | []EnvVar | []*EnvVar |  |  | List of environment variables to set in the container. Cannot be updated. +optional +patchMergeKey=name +patchStrategy=merge |  |
| envFrom | []EnvFromSource | []*EnvFromSource |  |  | List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. +optional |  |
| image | string | string |  |  | Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. +optional |  |
| imagePullPolicy | PullPolicy | PullPolicy |  |  |  |  |
| lifecycle | Lifecycle | Lifecycle |  |  |  |  |
| livenessProbe | Probe | Probe |  |  |  |  |
| name | string | string |  |  | Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. |  |
| ports | []ContainerPort | []*ContainerPort |  |  | List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. +optional +patchMergeKey=containerPort +patchStrategy=merge +listType=map +listMapKey=containerPort +listMapKey=protocol |  |
| readinessProbe | Probe | Probe |  |  |  |  |
| resources | ResourceRequirements | ResourceRequirements |  |  |  |  |
| securityContext | SecurityContext | SecurityContext |  |  |  |  |
| startupProbe | Probe | Probe |  |  |  |  |
| stdin | boolean | bool |  |  | Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. +optional |  |
| stdinOnce | boolean | bool |  |  | Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false. +optional |  |
| terminationMessagePath | string | string |  |  | Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. +optional |  |
| terminationMessagePolicy | TerminationMessagePolicy | TerminationMessagePolicy |  |  |  |  |
| tty | boolean | bool |  |  | Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. +optional |  |
| volumeDevices | []VolumeDevice | []*VolumeDevice |  |  | volumeDevices is the list of block devices to be used by the container. +patchMergeKey=devicePath +patchStrategy=merge +optional |  |
| volumeMounts | []VolumeMount | []*VolumeMount |  |  | Pod volumes to mount into the container's filesystem. Cannot be updated. +optional +patchMergeKey=mountPath +patchStrategy=merge |  |
| workingDir | string | string |  |  | Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. +optional |  |

ContainerPort

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| containerPort | int32 (formatted integer) | int32 |  |  | Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. |  |
| hostIP | string | string |  |  | What host IP to bind the external port to. +optional |  |
| hostPort | int32 (formatted integer) | int32 |  |  | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. +optional |  |
| name | string | string |  |  | If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. +optional |  |
| protocol | Protocol | Protocol |  |  |  |  |

ContainerSetRetryStrategy

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| duration | string | string |  |  | Duration is the time between each retry, example values are "300ms", "1s" or "5m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |  |
| retries | IntOrString | IntOrString |  |  |  |  |

ContainerSetTemplate

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| containers | []ContainerNode | []*ContainerNode |  |  |  |  |
| retryStrategy | ContainerSetRetryStrategy | ContainerSetRetryStrategy |  |  |  |  |
| volumeMounts | []VolumeMount | []*VolumeMount |  |  |  |  |
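A ContainerSetTemplate runs several ContainerNodes inside a single pod, with `dependencies` controlling execution order and an optional ContainerSetRetryStrategy retrying the whole set. A minimal sketch, assuming the image and workflow names shown here (they are illustrative, not part of this reference):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: container-set-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      containerSet:
        retryStrategy:
          retries: "3"          # IntOrString; retry the whole container set up to 3 times
          duration: 10s         # wait 10 seconds between retries
        containers:
          - name: fetch
            image: argoproj/argosay:v2
            command: [/argosay]
          - name: process
            image: argoproj/argosay:v2
            command: [/argosay]
            dependencies:       # run only after the "fetch" container has succeeded
              - fetch
```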

ContinueOn

-
-

It can be specified if the workflow should continue when the pod errors, fails or both.

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| error | boolean | bool |  |  | +optional |  |
| failed | boolean | bool |  |  | +optional |  |
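For example, `continueOn` can be set on a step or DAG task so the workflow proceeds even if that node fails. A minimal sketch (the `may-fail` template name is illustrative):

```yaml
templates:
  - name: main
    steps:
      - - name: flaky
          template: may-fail      # hypothetical template that sometimes exits non-zero
          continueOn:
            failed: true          # keep going if the pod fails
            error: false          # still stop on workflow-level errors
```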

Counter

-
-

Counter is a Counter prometheus metric

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
valuestringstringValue is the value of the metric
-

CreateS3BucketOptions

-
-

CreateS3BucketOptions options used to determine the automatic bucket-creation process

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| objectLocking | boolean | bool |  |  | ObjectLocking Enable object locking |  |

DAGTask

-
-

DAGTask represents a node in the graph during DAG execution

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| arguments | Arguments | Arguments |  |  |  |  |
| continueOn | ContinueOn | ContinueOn |  |  |  |  |
| dependencies | []string | []string |  |  | Dependencies are names of other targets which this depends on |  |
| depends | string | string |  |  | Depends is an expression over the names of other targets which this depends on |  |
| hooks | LifecycleHooks | LifecycleHooks |  |  |  |  |
| inline | Template | Template |  |  |  |  |
| name | string | string |  |  | Name is the name of the target |  |
| onExit | string | string |  |  | OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template. DEPRECATED: Use Hooks[exit].Template instead. |  |
| template | string | string |  |  | Name of template to execute |  |
| templateRef | TemplateRef | TemplateRef |  |  |  |  |
| when | string | string |  |  | When is an expression in which the task should conditionally execute |  |
| withItems | []Item | []Item |  |  | WithItems expands a task into multiple parallel tasks from the items in the list |  |
| withParam | string | string |  |  | WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. |  |
| withSequence | Sequence | Sequence |  |  |  |  |

DAGTemplate

-
-

DAGTemplate is a template subtype for directed acyclic graph templates

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| failFast | boolean | bool |  |  | This flag is for DAG logic. The DAG logic has a built-in "fail fast" feature to stop scheduling new steps, as soon as it detects that one of the DAG nodes is failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true, if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and example about this feature at https://github.com/argoproj/argo-workflows/issues/1442 |  |
| target | string | string |  |  | Target are one or more names of targets to execute in a DAG |  |
| tasks | []DAGTask | []*DAGTask |  |  | Tasks are a list of DAG tasks +patchStrategy=merge +patchMergeKey=name |  |
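Taken together, a DAGTemplate is a list of DAGTasks wired up with `dependencies`/`depends`, optionally fanned out with `withItems`/`withParam`. A minimal sketch, assuming the illustrative `echo` template and parameter names below:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        failFast: false            # let other branches finish even if one task fails
        tasks:
          - name: A
            template: echo
            arguments:
              parameters:
                - name: message
                  value: "A"
          - name: B
            depends: "A.Succeeded"   # run only after A succeeds
            template: echo
            withItems:               # expand into one parallel task per item
              - hello
              - world
            arguments:
              parameters:
                - name: message
                  value: "{{item}}"
    - name: echo
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.19
        command: [echo, "{{inputs.parameters.message}}"]
```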

Data

-
-

Data is a data template

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
sourceDataSourceDataSource
transformationTransformationTransformation
-

DataSource

-
-

DataSource sources external data into a data template

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
artifactPathsArtifactPathsArtifactPaths
-

DownwardAPIProjection

-
-

Note that this is identical to a downwardAPI volume source without the default -mode.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
items[]DownwardAPIVolumeFile[]*DownwardAPIVolumeFileItems is a list of DownwardAPIVolume file
+optional
-

DownwardAPIVolumeFile

-
-

DownwardAPIVolumeFile represents information to create the file containing the pod field

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fieldRefObjectFieldSelectorObjectFieldSelector
modeint32 (formatted integer)int32Optional: mode bits used to set permissions on this file, must be an octal value
between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
+optional
pathstringstringRequired: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'
resourceFieldRefResourceFieldSelectorResourceFieldSelector
-

DownwardAPIVolumeSource

-
-

Downward API volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
defaultModeint32 (formatted integer)int32Optional: mode bits to use on created files by default. Must be a
Optional: mode bits used to set permissions on created files by default.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
Defaults to 0644.
Directories within the path are not affected by this setting.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
+optional
items[]DownwardAPIVolumeFile[]*DownwardAPIVolumeFileItems is a list of downward API volume file
+optional
-

Duration

-
-

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.

-
-

interface{}

-

EmptyDirVolumeSource

-
-

Empty directory volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
mediumStorageMediumStorageMedium
sizeLimitQuantityQuantity
-

EnvFromSource

-
-

EnvFromSource represents the source of a set of ConfigMaps

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
configMapRefConfigMapEnvSourceConfigMapEnvSource
prefixstringstringAn optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.
+optional
secretRefSecretEnvSourceSecretEnvSource
-

EnvVar

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
namestringstringName of the environment variable. Must be a C_IDENTIFIER.
valuestringstringVariable references $(VAR_NAME) are expanded
using the previously defined environment variables in the container and
any service environment variables. If a variable cannot be resolved,
the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e.
"$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)".
Escaped references will never be expanded, regardless of whether the variable
exists or not.
Defaults to "".
+optional
valueFromEnvVarSourceEnvVarSource
-

EnvVarSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
configMapKeyRefConfigMapKeySelectorConfigMapKeySelector
fieldRefObjectFieldSelectorObjectFieldSelector
resourceFieldRefResourceFieldSelectorResourceFieldSelector
secretKeyRefSecretKeySelectorSecretKeySelector
-

EphemeralVolumeSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
volumeClaimTemplatePersistentVolumeClaimTemplatePersistentVolumeClaimTemplate
-

ExecAction

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
command[]string[]stringCommand is the command line to execute inside the container, the working directory for the
command is root ('/') in the container's filesystem. The command is simply exec'd, it is
not run inside a shell, so traditional shell instructions ('', etc) won't work. To use
a shell, you need to explicitly call out to that shell.
Exit status of 0 is treated as live/healthy and non-zero is unhealthy.
+optional
-

ExecuteTemplateArgs

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
templateTemplateTemplate
workflowWorkflowWorkflow
-

ExecuteTemplateReply

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
nodeNodeResultNodeResult
requeueDurationDuration
-

ExecutorConfig

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
serviceAccountNamestringstringServiceAccountName specifies the service account name of the executor container.
-

FCVolumeSource

-
-

Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is the filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
TODO: how do we prevent errors in the filesystem from compromising the machine
+optional
lunint32 (formatted integer)int32lun is Optional: FC target lun number
+optional
readOnlybooleanboolreadOnly is Optional: Defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
+optional
targetWWNs[]string[]stringtargetWWNs is Optional: FC target worldwide names (WWNs)
+optional
wwids[]string[]stringwwids Optional: FC volume world wide identifiers (wwids)
Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.
+optional
-

FieldsV1

-
-

Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map; 'v:<value>', where <value> is the exact json formatted value of a list item; 'i:<index>', where <index> is the position of an item in a list; 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values. If a key maps to an empty Fields value, the field that key represents is part of the set.

The exact format is defined in sigs.k8s.io/structured-merge-diff
+protobuf.options.(gogoproto.goproto_stringer)=false

-

interface{}

-

FlexVolumeSource

-
-

FlexVolume represents a generic volume resource that is -provisioned/attached using an exec based plugin.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
driverstringstringdriver is the name of the driver to use for this volume.
fsTypestringstringfsType is the filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script.
+optional
optionsmap of stringmap[string]stringoptions is Optional: this field holds extra command options if any.
+optional
readOnlybooleanboolreadOnly is Optional: defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
+optional
secretRefLocalObjectReferenceLocalObjectReference
-

FlockerVolumeSource

-
-

One and only one of datasetName and datasetUUID should be set. -Flocker volumes do not support ownership management or SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
datasetNamestringstringdatasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker
should be considered as deprecated
+optional
datasetUUIDstringstringdatasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset
+optional
-

GCEPersistentDiskVolumeSource

-
-

A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is filesystem type of the volume that you want to mount.
Tip: Ensure that the filesystem type is supported by the host operating system.
Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
TODO: how do we prevent errors in the filesystem from compromising the machine
+optional
partitionint32 (formatted integer)int32partition is the partition in the volume that you want to mount.
If omitted, the default is to mount by volume name.
Examples: For volume /dev/sda1, you specify the partition as "1".
Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
+optional
pdNamestringstringpdName is unique name of the PD resource in GCE. Used to identify the disk in GCE.
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
readOnlybooleanboolreadOnly here will force the ReadOnly setting in VolumeMounts.
Defaults to false.
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
+optional
-

GCSArtifact

-
-

GCSArtifact is the location of a GCS artifact

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
bucketstringstringBucket is the name of the bucket
keystringstringKey is the path in the bucket where the artifact resides
serviceAccountKeySecretSecretKeySelectorSecretKeySelector
-

GRPCAction

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| port | int32 (formatted integer) | int32 |  |  | Port number of the gRPC service. Number must be in the range 1 to 65535. |  |
| service | string | string |  |  | Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. +optional +default="" |  |

-

Gauge

-
-

Gauge is a Gauge prometheus metric

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
operationGaugeOperationGaugeOperation
realtimebooleanboolRealtime emits this metric in real time if applicable
valuestringstringValue is the value to be used in the operation with the metric's current value. If no operation is set,
value is the value of the metric
-

GaugeOperation

- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
GaugeOperationstringstring
-

GitArtifact

-
-

GitArtifact is the location of a git artifact

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| branch | string | string |  |  | Branch is the branch to fetch when SingleBranch is enabled |  |
| depth | uint64 (formatted integer) | uint64 |  |  | Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip |  |
| disableSubmodules | boolean | bool |  |  | DisableSubmodules disables submodules during git clone |  |
| fetch | []string | []string |  |  | Fetch specifies a number of refs that should be fetched before checkout |  |
| insecureIgnoreHostKey | boolean | bool |  |  | InsecureIgnoreHostKey disables SSH strict host key checking during git clone |  |
| passwordSecret | SecretKeySelector | SecretKeySelector |  |  |  |  |
| repo | string | string |  |  | Repo is the git repository |  |
| revision | string | string |  |  | Revision is the git commit, tag, branch to checkout |  |
| singleBranch | boolean | bool |  |  | SingleBranch enables single branch clone, using the branch parameter |  |
| sshPrivateKeySecret | SecretKeySelector | SecretKeySelector |  |  |  |  |
| usernameSecret | SecretKeySelector | SecretKeySelector |  |  |  |  |
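A git artifact is usually declared as a template input that is checked out to `path` before the main container starts. A minimal sketch (the repository URL and paths are illustrative):

```yaml
templates:
  - name: clone-and-list
    inputs:
      artifacts:
        - name: source
          path: /src                    # the repository is checked out here
          git:
            repo: https://github.com/argoproj/argo-workflows.git
            revision: main
            depth: 1                    # shallow clone
            singleBranch: true
            branch: main
    container:
      image: alpine:3.19
      command: [ls, /src]
```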

GitRepoVolumeSource

-
-

DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
directorystringstringdirectory is the target directory name.
Must not contain or start with '..'. If '.' is supplied, the volume directory will be the
git repository. Otherwise, if specified, the volume will contain the git repository in
the subdirectory with the given name.
+optional
repositorystringstringrepository is the URL
revisionstringstringrevision is the commit hash for the specified revision.
+optional
-

GlusterfsVolumeSource

-
-

Glusterfs volumes do not support ownership management or SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
endpointsstringstringendpoints is the endpoint name that details Glusterfs topology.
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
pathstringstringpath is the Glusterfs volume path.
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
readOnlybooleanboolreadOnly here will force the Glusterfs volume to be mounted with read-only permissions.
Defaults to false.
More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod
+optional
-

HDFSArtifact

-
-

HDFSArtifact is the location of an HDFS artifact

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
addresses[]string[]stringAddresses is accessible addresses of HDFS name nodes
forcebooleanboolForce copies a file forcibly even if it exists
hdfsUserstringstringHDFSUser is the user to access HDFS file system.
It is ignored if either ccache or keytab is used.
krbCCacheSecretSecretKeySelectorSecretKeySelector
krbConfigConfigMapConfigMapKeySelectorConfigMapKeySelector
krbKeytabSecretSecretKeySelectorSecretKeySelector
krbRealmstringstringKrbRealm is the Kerberos realm used with Kerberos keytab
It must be set if keytab is used.
krbServicePrincipalNamestringstringKrbServicePrincipalName is the principal name of Kerberos service
It must be set if either ccache or keytab is used.
krbUsernamestringstringKrbUsername is the Kerberos username used with Kerberos keytab
It must be set if keytab is used.
pathstringstringPath is a file path in HDFS
-

HTTP

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| body | string | string |  |  | Body is content of the HTTP Request |  |
| bodyFrom | HTTPBodySource | HTTPBodySource |  |  |  |  |
| headers | HTTPHeaders | HTTPHeaders |  |  |  |  |
| insecureSkipVerify | boolean | bool |  |  | InsecureSkipVerify is a bool which, if set to true, will skip TLS verification for the HTTP client |  |
| method | string | string |  |  | Method is the HTTP method for the HTTP Request |  |
| successCondition | string | string |  |  | SuccessCondition is an expression which, if evaluated to true, is considered successful |  |
| timeoutSeconds | int64 (formatted integer) | int64 |  |  | TimeoutSeconds is request timeout for HTTP Request. Default is 30 seconds |  |
| url | string | string |  |  | URL of the HTTP Request |  |
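These fields back the `http` template type, which lets a step make an HTTP request without starting a pod. A minimal sketch, assuming the illustrative URL and success condition below:

```yaml
templates:
  - name: health-check
    http:
      url: https://example.com/api/health
      method: GET
      timeoutSeconds: 20
      successCondition: "response.statusCode == 200"   # evaluated against the HTTP response
      headers:
        - name: Accept
          value: application/json
```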

HTTPArtifact

-
-

HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| auth | HTTPAuth | HTTPAuth |  |  |  |  |
| headers | []Header | []*Header |  |  | Headers are an optional list of headers to send with HTTP requests for artifacts |  |
| url | string | string |  |  | URL of the artifact |  |
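An HTTPArtifact is typically used to pull a file into a template as an input artifact, optionally with headers and HTTPAuth credentials. A minimal sketch (the URL and the Secret name are illustrative, not part of this reference):

```yaml
inputs:
  artifacts:
    - name: data
      path: /tmp/data.json
      http:
        url: https://example.com/files/data.json
        headers:
          - name: Accept
            value: application/json
        auth:
          basicAuth:
            usernameSecret:
              name: http-creds        # hypothetical Secret holding the credentials
              key: username
            passwordSecret:
              name: http-creds
              key: password
```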

HTTPAuth

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
basicAuthBasicAuthBasicAuth
clientCertClientCertAuthClientCertAuth
oauth2OAuth2AuthOAuth2Auth
-

HTTPBodySource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
bytes[]uint8 (formatted integer)[]uint8
-

HTTPGetAction

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
hoststringstringHost name to connect to, defaults to the pod IP. You probably want to set
"Host" in httpHeaders instead.
+optional
httpHeaders[]HTTPHeader[]*HTTPHeaderCustom headers to set in the request. HTTP allows repeated headers.
+optional
pathstringstringPath to access on the HTTP server.
+optional
portIntOrStringIntOrString
schemeURISchemeURIScheme
-

HTTPHeader

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
namestringstring
valuestringstring
valueFromHTTPHeaderSourceHTTPHeaderSource
-

HTTPHeaderSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
secretKeyRefSecretKeySelectorSecretKeySelector
-

HTTPHeaders

-

[]HTTPHeader

Header

Header indicates a key-value request header to be used when fetching artifacts over HTTP

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| name | string | string |  |  | Name is the header name |  |
| value | string | string |  |  | Value is the literal value to use for the header |  |

Histogram

-
-

Histogram is a Histogram prometheus metric

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
buckets[]Amount[]AmountBuckets is a list of bucket divisors for the histogram
valuestringstringValue is the value of the metric
-

HostAlias

-
-

HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
hostnames[]string[]stringHostnames for the above IP address.
ipstringstringIP address of the host file entry.
-

HostPathType

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
HostPathTypestringstring+enum
-

HostPathVolumeSource

-
-

Host path volumes do not support ownership management or SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
pathstringstringpath of the directory on the host.
If the path is a symlink, it will follow the link to the real path.
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
typeHostPathTypeHostPathType
-

ISCSIVolumeSource

-
-

ISCSI volumes can only be mounted as read/write once. -ISCSI volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
chapAuthDiscoverybooleanboolchapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication
+optional
chapAuthSessionbooleanboolchapAuthSession defines whether support iSCSI Session CHAP authentication
+optional
fsTypestringstringfsType is the filesystem type of the volume that you want to mount.
Tip: Ensure that the filesystem type is supported by the host operating system.
Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi
TODO: how do we prevent errors in the filesystem from compromising the machine
+optional
initiatorNamestringstringinitiatorName is the custom iSCSI Initiator Name.
If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface
: will be created for the connection.
+optional
iqnstringstringiqn is the target iSCSI Qualified Name.
iscsiInterfacestringstringiscsiInterface is the interface Name that uses an iSCSI transport.
Defaults to 'default' (tcp).
+optional
lunint32 (formatted integer)int32lun represents iSCSI Target Lun number.
portals[]string[]stringportals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port
is other than default (typically TCP ports 860 and 3260).
+optional
readOnlybooleanboolreadOnly here will force the ReadOnly setting in VolumeMounts.
Defaults to false.
+optional
secretRefLocalObjectReferenceLocalObjectReference
targetPortalstringstringtargetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port
is other than default (typically TCP ports 860 and 3260).
-

Inputs

-
-

Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| artifacts | Artifacts | Artifacts |  |  |  |  |
| parameters | []Parameter | []*Parameter |  |  | Parameters are a list of parameters passed as inputs +patchStrategy=merge +patchMergeKey=name |  |
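In a template, inputs declare the parameters and artifacts the caller must supply before the step runs. A minimal sketch (the names and paths are illustrative):

```yaml
templates:
  - name: print-message
    inputs:
      parameters:
        - name: message
          default: "hello"            # used when the caller passes nothing
      artifacts:
        - name: data
          path: /tmp/input.txt        # the artifact is placed here before the container starts
    container:
      image: alpine:3.19
      command: [sh, -c]
      args: ["cat /tmp/input.txt; echo {{inputs.parameters.message}}"]
```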

IntOrString

-
-

+protobuf=true -+protobuf.options.(gogoproto.goproto_stringer)=false -+k8s:openapi-gen=true

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
IntValint32 (formatted integer)int32
StrValstringstring
TypeTypeType
-

Item

-
-

+protobuf.options.(gogoproto.goproto_stringer)=false -+kubebuilder:validation:Type=object

-
-

interface{}

-

KeyToPath

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
keystringstringkey is the key to project.
modeint32 (formatted integer)int32mode is Optional: mode bits used to set permissions on this file.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits.
If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
+optional
pathstringstringpath is the relative path of the file to map the key to.
May not be an absolute path.
May not contain the path element '..'.
May not start with the string '..'.
-

LabelSelector

-
-

A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. +structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
matchExpressions[]LabelSelectorRequirement[]*LabelSelectorRequirementmatchExpressions is a list of label selector requirements. The requirements are ANDed.
+optional
matchLabelsmap of stringmap[string]stringmatchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels
map is equivalent to an element of matchExpressions, whose key field is "key", the
operator is "In", and the values array contains only "value". The requirements are ANDed.
+optional
-

LabelSelectorOperator

- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
LabelSelectorOperatorstringstring
-

LabelSelectorRequirement

-
-

A label selector requirement is a selector that contains values, a key, and an operator that -relates the key and values.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
keystringstringkey is the label key that the selector applies to.
+patchMergeKey=key
+patchStrategy=merge
operatorLabelSelectorOperatorLabelSelectorOperator
values[]string[]stringvalues is an array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. This array is replaced during a strategic
merge patch.
+optional
-

Lifecycle

-
-

Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
postStartLifecycleHandlerLifecycleHandler
preStopLifecycleHandlerLifecycleHandler
-

LifecycleHandler

-
-

LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket, must be specified.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
execExecActionExecAction
httpGetHTTPGetActionHTTPGetAction
tcpSocketTCPSocketActionTCPSocketAction
-

LifecycleHook

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| arguments | Arguments | Arguments |  |  |  |  |
| expression | string | string |  |  | Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored |  |
| template | string | string |  |  | Template is the name of the template to execute by the hook |  |
| templateRef | TemplateRef | TemplateRef |  |  |  |  |
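LifecycleHooks attach these entries to a workflow, step, or task under `hooks`, keyed by hook name (`exit`, or any name paired with an `expression`). A minimal sketch (the `work`, `cleanup`, and `notify` template names and the expression are illustrative):

```yaml
templates:
  - name: main
    steps:
      - - name: step-1
          template: work               # hypothetical template doing the real work
          hooks:
            exit:                      # always runs when step-1 finishes
              template: cleanup
            failed:                    # runs when the expression evaluates to true
              expression: steps["step-1"].status == "Failed"
              template: notify
```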

LifecycleHooks

-

LifecycleHooks

-

LocalObjectReference

-
-

LocalObjectReference contains enough information to let you locate the -referenced object inside the same namespace. -+structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
namestringstringName of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?
+optional
-

ManagedFieldsEntry

-
-

ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource -that the fieldset applies to.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
apiVersionstringstringAPIVersion defines the version of this resource that this field set
applies to. The format is "group/version" just like the top-level
APIVersion field. It is necessary to track the version of a field
set because it cannot be automatically converted.
fieldsTypestringstringFieldsType is the discriminator for the different fields format and version.
There is currently only one possible value: "FieldsV1"
fieldsV1FieldsV1FieldsV1
managerstringstringManager is an identifier of the workflow managing these fields.
operationManagedFieldsOperationTypeManagedFieldsOperationType
subresourcestringstringSubresource is the name of the subresource used to update that object, or
empty string if the object was updated through the main resource. The
value of this field is used to distinguish between managers, even if they
share the same name. For example, a status update will be distinct from a
regular update using the same manager name.
Note that the APIVersion field is not related to the Subresource field and
it always corresponds to the version of the main resource.
timeTimeTime
-

ManagedFieldsOperationType

- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
ManagedFieldsOperationTypestringstring
-

ManifestFrom

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
artifactArtifactArtifact
-

Memoize

-
-

Memoization enables caching for the Outputs of the template

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| cache | Cache | Cache |  |  |  |  |
| key | string | string |  |  | Key is the key to use as the caching key |  |
| maxAge | string | string |  |  | MaxAge is the maximum age (e.g. "180s", "24h") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored. |  |
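A template enables memoization by setting a `memoize` block with a caching key, a `maxAge`, and a Cache backed by a ConfigMap. A minimal sketch (the ConfigMap name and parameter are illustrative):

```yaml
templates:
  - name: expensive-step
    inputs:
      parameters:
        - name: message
    memoize:
      key: "{{inputs.parameters.message}}"   # identical keys reuse the cached outputs
      maxAge: "24h"                          # entries older than this are ignored
      cache:
        configMap:
          name: expensive-step-cache         # hypothetical ConfigMap used as the cache store
    container:
      image: alpine:3.19
      command: [echo, "{{inputs.parameters.message}}"]
```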

Metadata

-
-

Pod metadata

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
annotationsmap of stringmap[string]string
labelsmap of stringmap[string]string
-

MetricLabel

-
-

MetricLabel is a single label for a prometheus metric

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
keystringstring
valuestringstring
-

Metrics

-
-

Metrics are a list of metrics emitted from a Workflow/Template

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| prometheus | []Prometheus | []*Prometheus |  |  | Prometheus is a list of prometheus metrics to be emitted |  |
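Metrics are declared on a workflow or template under `metrics.prometheus`, each entry combining labels with one of the Counter, Gauge, or Histogram types described in this reference. A minimal sketch (the metric names and labels are illustrative):

```yaml
templates:
  - name: work
    metrics:
      prometheus:
        - name: result_counter
          help: "Count of runs, labelled by status"
          labels:
            - key: status
              value: "{{status}}"
          counter:
            value: "1"               # increment by one per run
        - name: duration_gauge
          help: "Duration of this template in seconds"
          gauge:
            realtime: true           # emit while the node is running
            value: "{{duration}}"
    container:
      image: alpine:3.19
      command: [echo, done]
```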

MountPropagationMode

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
MountPropagationModestringstring+enum
-

Mutex

-
-

Mutex holds Mutex configuration

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| name | string | string |  |  | name of the mutex |  |
| namespace | string | string |  | "[namespace of workflow]" |  |  |
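A mutex is referenced from the `synchronization` section of a workflow or template so that only one holder runs at a time. A minimal sketch (the mutex name is illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mutex-demo-
spec:
  entrypoint: main
  synchronization:
    mutex:
      name: single-writer      # only one workflow holding this mutex runs at a time
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [sleep, "30"]
```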

NFSVolumeSource

-
-

NFS volumes do not support ownership management or SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
pathstringstringpath that is exported by the NFS server.
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
readOnlybooleanboolreadOnly here will force the NFS export to be mounted with read-only permissions.
Defaults to false.
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
+optional
serverstringstringserver is the hostname or IP address of the NFS server.
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
-

NodeAffinity

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
preferredDuringSchedulingIgnoredDuringExecution[]PreferredSchedulingTerm[]*PreferredSchedulingTermThe scheduler will prefer to schedule pods to nodes that satisfy
the affinity expressions specified by this field, but it may choose
a node that violates one or more of the expressions. The node that is
most preferred is the one with the greatest sum of weights, i.e.
for each node that meets all of the scheduling requirements (resource
request, requiredDuringScheduling affinity expressions, etc.),
compute a sum by iterating through the elements of this field and adding
"weight" to the sum if the node matches the corresponding matchExpressions; the
node(s) with the highest sum are the most preferred.
+optional
requiredDuringSchedulingIgnoredDuringExecutionNodeSelectorNodeSelector
-

NodePhase

- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
NodePhasestringstring
-

NodeResult

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
messagestringstring
outputsOutputsOutputs
phaseNodePhaseNodePhase
progressProgressProgress
-

NodeSelector

-
-

A node selector represents the union of the results of one or more label queries -over a set of nodes; that is, it represents the OR of the selectors represented -by the node selector terms. -+structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
nodeSelectorTerms[]NodeSelectorTerm[]*NodeSelectorTermRequired. A list of node selector terms. The terms are ORed.
-

NodeSelectorOperator

-
-

A node selector operator is the set of operators that can be used in -a node selector requirement. -+enum

-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
NodeSelectorOperatorstringstringA node selector operator is the set of operators that can be used in
a node selector requirement.
+enum
-

NodeSelectorRequirement

-
-

A node selector requirement is a selector that contains values, a key, and an operator -that relates the key and values.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
keystringstringThe label key that the selector applies to.
operatorNodeSelectorOperatorNodeSelectorOperator
values[]string[]stringAn array of string values. If the operator is In or NotIn,
the values array must be non-empty. If the operator is Exists or DoesNotExist,
the values array must be empty. If the operator is Gt or Lt, the values
array must have a single element, which will be interpreted as an integer.
This array is replaced during a strategic merge patch.
+optional
-

NodeSelectorTerm

-
-

A null or empty node selector term matches no objects. The requirements of -them are ANDed. -The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. -+structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
matchExpressions[]NodeSelectorRequirement[]*NodeSelectorRequirementA list of node selector requirements by node's labels.
+optional
matchFields[]NodeSelectorRequirement[]*NodeSelectorRequirementA list of node selector requirements by node's fields.
+optional
-

NoneStrategy

-
-

NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately.

-
-

interface{}

-

OAuth2Auth

-
-

OAuth2Auth holds all information for client authentication via OAuth2 tokens

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
clientIDSecretSecretKeySelectorSecretKeySelector
clientSecretSecretSecretKeySelectorSecretKeySelector
endpointParams[]OAuth2EndpointParam[]*OAuth2EndpointParam
scopes[]string[]string
tokenURLSecretSecretKeySelectorSecretKeySelector
-

OAuth2EndpointParam

-
-

EndpointParam is for requesting optional fields that should be sent in the oauth request

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
keystringstringName is the header name
valuestringstringValue is the literal value to use for the header
-

OSSArtifact

-
-

OSSArtifact is the location of an Alibaba Cloud OSS artifact

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
accessKeySecretSecretKeySelectorSecretKeySelector
bucketstringstringBucket is the name of the bucket
createBucketIfNotPresentbooleanboolCreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist
endpointstringstringEndpoint is the hostname of the bucket endpoint
keystringstringKey is the path in the bucket where the artifact resides
lifecycleRuleOSSLifecycleRuleOSSLifecycleRule
secretKeySecretSecretKeySelectorSecretKeySelector
securityTokenstringstringSecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm
useSDKCredsbooleanboolUseSDKCreds tells the driver to figure out credentials based on sdk defaults.
-

OSSLifecycleRule

-
-

OSSLifecycleRule specifies how to manage bucket's lifecycle

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
markDeletionAfterDaysint32 (formatted integer)int32MarkDeletionAfterDays is the number of days before we delete objects in the bucket
markInfrequentAccessAfterDaysint32 (formatted integer)int32MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type
-

ObjectFieldSelector

-
-

+structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
apiVersionstringstringVersion of the schema the FieldPath is written in terms of, defaults to "v1".
+optional
fieldPathstringstringPath of the field to select in the specified API version.
-

ObjectMeta

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
namestringstring
namespacestringstring
uidstringstring
-

Outputs

-
-

Outputs hold parameters, artifacts, and results from a step

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| artifacts | Artifacts | Artifacts |  |  |  |  |
| exitCode | string | string |  |  | ExitCode holds the exit code of a script template |  |
| parameters | []Parameter | []*Parameter |  |  | Parameters holds the list of output parameters produced by a step +patchStrategy=merge +patchMergeKey=name |  |
| result | string | string |  |  | Result holds the result (stdout) of a script template |  |
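A template usually fills its outputs by pointing parameters and artifacts at files the container wrote. A minimal sketch (the paths and names are illustrative):

```yaml
templates:
  - name: produce
    container:
      image: alpine:3.19
      command: [sh, -c]
      args: ["echo -n hello > /tmp/out.txt"]
    outputs:
      parameters:
        - name: result-text
          valueFrom:
            path: /tmp/out.txt      # the parameter value is read from this file
      artifacts:
        - name: out-file
          path: /tmp/out.txt        # the same file is also exported as an artifact
```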

OwnerReference

-
-

OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field. +structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
apiVersionstringstringAPI version of the referent.
blockOwnerDeletionbooleanboolIf true, AND if the owner has the "foregroundDeletion" finalizer, then
the owner cannot be deleted from the key-value store until this
reference is removed.
See https://kubernetes.io/docs/concepts/architecture/garbage-collection/#foreground-deletion
for how the garbage collector interacts with this field and enforces the foreground deletion.
Defaults to false.
To set this field, a user needs "delete" permission of the owner,
otherwise 422 (Unprocessable Entity) will be returned.
+optional
controllerbooleanboolIf true, this reference points to the managing controller.
+optional
kindstringstringKind of the referent.
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
namestringstringName of the referent.
More info: http://kubernetes.io/docs/user-guide/identifiers#names
uidUIDUID
-

ParallelSteps

-
-

+kubebuilder:validation:Type=array

-
-

interface{}

-

Parameter

-
-

Parameter indicates a passed string parameter to a service template with an optional default value

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| default | AnyString | AnyString |  |  |  |  |
| description | AnyString | AnyString |  |  |  |  |
| enum | []AnyString | []AnyString |  |  | Enum holds a list of string values to choose from, for the actual value of the parameter |  |
| globalName | string | string |  |  | GlobalName exports an output parameter to the global scope, making it available as '{{workflow.outputs.parameters.XXXX}}' and in workflow.status.outputs.parameters |  |
| name | string | string |  |  | Name is the parameter name |  |
| value | AnyString | AnyString |  |  |  |  |
| valueFrom | ValueFrom | ValueFrom |  |  |  |  |
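Parameters appear both as workflow arguments and as template inputs/outputs; `enum` restricts the accepted values while `value`/`default` seed them. A minimal sketch (the parameter name and values are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: param-demo-
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: environment
        value: staging          # current value
        enum:                   # only these values are accepted on submission
          - staging
          - production
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [echo, "{{workflow.parameters.environment}}"]
```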

PersistentVolumeAccessMode

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
PersistentVolumeAccessModestringstring+enum
-

PersistentVolumeClaimSpec

-
-

PersistentVolumeClaimSpec describes the common attributes of storage devices -and allows a Source for provider-specific attributes

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| accessModes | []PersistentVolumeAccessMode | []PersistentVolumeAccessMode | | | accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 +optional | |
| dataSource | TypedLocalObjectReference | TypedLocalObjectReference | | | | |
| dataSourceRef | TypedLocalObjectReference | TypedLocalObjectReference | | | | |
| resources | ResourceRequirements | ResourceRequirements | | | | |
| selector | LabelSelector | LabelSelector | | | | |
| storageClassName | string | string | | | storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 +optional | |
| volumeMode | PersistentVolumeMode | PersistentVolumeMode | | | | |
| volumeName | string | string | | | volumeName is the binding reference to the PersistentVolume backing this claim. +optional | |

PersistentVolumeClaimTemplate

-
-

PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource.

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| annotations | map of string | map[string]string | | | Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations +optional | |
| clusterName | string | string | | | Deprecated: ClusterName is a legacy field that was always cleared by the system and never used; it will be removed completely in 1.25. The name in the go struct is changed to help clients detect accidental use. +optional | |
| creationTimestamp | Time | Time | | | | |
| deletionGracePeriodSeconds | int64 (formatted integer) | int64 | | | Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only. +optional | |
| deletionTimestamp | Time | Time | | | | |
| finalizers | []string | []string | | | Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list. +optional +patchStrategy=merge | |
| generateName | string | string | | | GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will return a 409. Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency +optional | |
| generation | int64 (formatted integer) | int64 | | | A sequence number representing a specific generation of the desired state. Populated by the system. Read-only. +optional | |
| labels | map of string | map[string]string | | | Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels +optional | |
| managedFields | []ManagedFieldsEntry | []*ManagedFieldsEntry | | | ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object. +optional | |
| name | string | string | | | Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names +optional | |
| namespace | string | string | | | Namespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces +optional | |
| ownerReferences | []OwnerReference | []*OwnerReference | | | List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller. +optional +patchMergeKey=uid +patchStrategy=merge | |
| resourceVersion | string | string | | | An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency +optional | |
| selfLink | string | string | | | Deprecated: selfLink is a legacy read-only field that is no longer populated by the system. +optional | |
| spec | PersistentVolumeClaimSpec | PersistentVolumeClaimSpec | | | | |
| uid | UID | UID | | | | |
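
The same metadata/spec shape is what a workflow supplies when it asks the controller to create claims for it. A hedged sketch, assuming a made-up claim name `workdir` and storage size:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pvc-example-
spec:
  entrypoint: main
  volumeClaimTemplates:              # claim templates created for the workflow
    - metadata:
        name: workdir                # ObjectMeta.name
      spec:                          # PersistentVolumeClaimSpec
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi             # Quantity
  templates:
    - name: main
      container:
        image: busybox
        command: [sh, -c, "echo hello > /work/hello.txt"]
        volumeMounts:
          - name: workdir
            mountPath: /work
```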

PersistentVolumeClaimVolumeSource

-
-

This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system).

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| claimName | string | string | | | claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims | |
| readOnly | boolean | bool | | | readOnly Will force the ReadOnly setting in VolumeMounts. Default false. +optional | |

PersistentVolumeMode

-
-

+enum

-
| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| PersistentVolumeMode | string | string | | +enum | |

PhotonPersistentDiskVolumeSource

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| fsType | string | string | | | fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. | |
| pdID | string | string | | | pdID is the ID that identifies Photon Controller persistent disk | |

Plugin

-
-

Plugin is an Object with exactly one key

-
-

interface{}

-

PodAffinity

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| preferredDuringSchedulingIgnoredDuringExecution | []WeightedPodAffinityTerm | []*WeightedPodAffinityTerm | | | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional | |
| requiredDuringSchedulingIgnoredDuringExecution | []PodAffinityTerm | []*PodAffinityTerm | | | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional | |

PodAffinityTerm

-
-

Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| labelSelector | LabelSelector | LabelSelector | | | | |
| namespaceSelector | LabelSelector | LabelSelector | | | | |
| namespaces | []string | []string | | | namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". +optional | |
| topologyKey | string | string | | | This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. | |

PodAntiAffinity

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| preferredDuringSchedulingIgnoredDuringExecution | []WeightedPodAffinityTerm | []*WeightedPodAffinityTerm | | | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. +optional | |
| requiredDuringSchedulingIgnoredDuringExecution | []PodAffinityTerm | []*PodAffinityTerm | | | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. +optional | |
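
In a workflow these scheduling constraints are set per template through the affinity field. A hedged sketch, where the label selector values (app: my-cache) are placeholders:

```yaml
templates:
  - name: main
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-cache                 # avoid nodes already running these pods
            topologyKey: kubernetes.io/hostname
    container:
      image: busybox
      command: [echo, "scheduled away from my-cache pods"]
```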

PodFSGroupChangePolicy

-
-

PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted.
+enum

| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| PodFSGroupChangePolicy | string | string | | PodFSGroupChangePolicy holds policies that will be used for applying fsGroup to a volume when volume is mounted. +enum | |

PodSecurityContext

-
-

Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| fsGroup | int64 (formatted integer) | int64 | | | A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows. +optional | |
| fsGroupChangePolicy | PodFSGroupChangePolicy | PodFSGroupChangePolicy | | | | |
| runAsGroup | int64 (formatted integer) | int64 | | | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | |
| runAsNonRoot | boolean | bool | | | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. +optional | |
| runAsUser | int64 (formatted integer) | int64 | | | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. +optional | |
| seLinuxOptions | SELinuxOptions | SELinuxOptions | | | | |
| seccompProfile | SeccompProfile | SeccompProfile | | | | |
| supplementalGroups | []int64 (formatted integer) | []int64 | | | A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows. +optional | |
| sysctls | []Sysctl | []*Sysctl | | | Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. +optional | |
| windowsOptions | WindowsSecurityContextOptions | WindowsSecurityContextOptions | | | | |

PortworxVolumeSource

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| fsType | string | string | | | fSType represents the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. | |
| readOnly | boolean | bool | | | readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. +optional | |
| volumeID | string | string | | | volumeID uniquely identifies a Portworx volume | |

PreferredSchedulingTerm

-
-

An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| preference | NodeSelectorTerm | NodeSelectorTerm | | | | |
| weight | int32 (formatted integer) | int32 | | | Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. | |

Probe

-
-

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| exec | ExecAction | ExecAction | | | | |
| failureThreshold | int32 (formatted integer) | int32 | | | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. +optional | |
| grpc | GRPCAction | GRPCAction | | | | |
| httpGet | HTTPGetAction | HTTPGetAction | | | | |
| initialDelaySeconds | int32 (formatted integer) | int32 | | | Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional | |
| periodSeconds | int32 (formatted integer) | int32 | | | How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. +optional | |
| successThreshold | int32 (formatted integer) | int32 | | | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. +optional | |
| tcpSocket | TCPSocketAction | TCPSocketAction | | | | |
| terminationGracePeriodSeconds | int64 (formatted integer) | int64 | | | Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. +optional | |
| timeoutSeconds | int32 (formatted integer) | int32 | | | Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes +optional | |
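
Probes are plain Kubernetes probes and can be attached to any container-based template. A hedged sketch using a TCP readiness probe; the image, port, and timings are illustrative only:

```yaml
templates:
  - name: redis-daemon
    daemon: true                  # run this step as a daemoned container
    container:
      image: redis:alpine
      readinessProbe:             # Probe
        tcpSocket:                # TCPSocketAction
          port: 6379
        initialDelaySeconds: 5    # wait before the first probe
        periodSeconds: 10         # probe every 10 seconds
        failureThreshold: 3       # mark unready after 3 consecutive failures
```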

ProcMountType

-
-

+enum

-
| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| ProcMountType | string | string | | +enum | |

Progress

| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| Progress | string | string | | | |

ProjectedVolumeSource

-
-

Represents a projected volume source

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| defaultMode | int32 (formatted integer) | int32 | | | defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. +optional | |
| sources | []VolumeProjection | []*VolumeProjection | | | sources is the list of volume projections +optional | |

Prometheus

-
-

Prometheus is a prometheus metric to be emitted

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| counter | Counter | Counter | | | | |
| gauge | Gauge | Gauge | | | | |
| help | string | string | | | Help is a string that describes the metric | |
| histogram | Histogram | Histogram | | | | |
| labels | []MetricLabel | []*MetricLabel | | | Labels is a list of metric labels | |
| name | string | string | | | Name is the name of the metric | |
| when | string | string | | | When is a conditional statement that decides when to emit the metric | |
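
As a rough illustration, a template can emit a custom counter keyed on the step's status. This is a sketch only; the metric name, label, and the use of the `{{status}}` workflow variable are assumptions for the example:

```yaml
templates:
  - name: flaky-step
    metrics:
      prometheus:
        - name: result_counter                      # Name of the metric
          help: "Count of step execution results"   # Help is required
          labels:
            - key: status
              value: "{{status}}"
          when: "{{status}} == Succeeded"           # emit only on success
          counter:
            value: "1"
    container:
      image: busybox
      command: [echo, hello]
```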

Protocol

-
-

+enum

-
| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| Protocol | string | string | | +enum | |

PullPolicy

-
-

PullPolicy describes a policy for if/when to pull a container image
+enum

| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| PullPolicy | string | string | | PullPolicy describes a policy for if/when to pull a container image +enum | |

Quantity

-
-

The serialization format is:

<quantity>        ::= <signedNumber><suffix>
(Note that <suffix> may be empty, from the "" case in <decimalSI>.)
<digit>           ::= 0 | 1 | ... | 9
<digits>          ::= <digit> | <digit><digits>
<number>          ::= <digits> | <digits>.<digits> | <digits>. | .<digits>
<sign>            ::= "+" | "-"
<signedNumber>    ::= <number> | <sign><number>
<suffix>          ::= <binarySI> | <decimalExponent> | <decimalSI>
<binarySI>        ::= Ki | Mi | Gi | Ti | Pi | Ei
(International System of units; See: http://physics.nist.gov/cuu/Units/binary.html)
<decimalSI>       ::= m | "" | k | M | G | T | P | E
(Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.)
<decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber>

No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities.

When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized.

Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that:
a. No precision is lost
b. No fractional digits will be emitted
c. The exponent (or suffix) is as large as possible.
The sign will be omitted unless the number is negative.

Examples:
1.5 will be serialized as "1500m"
1.5Gi will be serialized as "1536Mi"

Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise.

Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.)

This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation.

+protobuf=true
+protobuf.embed=string
+protobuf.options.marshal=false
+protobuf.options.(gogoproto.goproto_stringer)=false
+k8s:deepcopy-gen=true
+k8s:openapi-gen=true

-

interface{}

-
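
In workflow manifests, Quantity values appear most often in container resource requests and limits. A minimal sketch with illustrative sizes:

```yaml
container:
  image: busybox
  command: [echo, hello]
  resources:
    requests:
      cpu: 100m        # decimal SI suffix: 0.1 CPU
      memory: 1.5Gi    # binary SI suffix; canonically serialized as 1536Mi
    limits:
      cpu: "1"
      memory: 2Gi
```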

QuobyteVolumeSource

-
-

Quobyte volumes do not support ownership management or SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
groupstringstringgroup to map volume access to
Default is no group
+optional
readOnlybooleanboolreadOnly here will force the Quobyte volume to be mounted with read-only permissions.
Defaults to false.
+optional
registrystringstringregistry represents a single or multiple Quobyte Registry services
specified as a string as host:port pair (multiple entries are separated with commas)
which acts as the central registry for volumes
tenantstringstringtenant owning the given Quobyte volume in the Backend
Used with dynamically provisioned Quobyte volumes, value is set by the plugin
+optional
userstringstringuser to map volume access to
Defaults to serviceaccount user
+optional
volumestringstringvolume is a string that references an already created Quobyte volume by name.
-

RBDVolumeSource

-
-

RBD volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is the filesystem type of the volume that you want to mount.
Tip: Ensure that the filesystem type is supported by the host operating system.
Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd
TODO: how do we prevent errors in the filesystem from compromising the machine
+optional
imagestringstringimage is the rados image name.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
keyringstringstringkeyring is the path to key ring for RBDUser.
Default is /etc/ceph/keyring.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
+optional
monitors[]string[]stringmonitors is a collection of Ceph monitors.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
poolstringstringpool is the rados pool name.
Default is rbd.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
+optional
readOnlybooleanboolreadOnly here will force the ReadOnly setting in VolumeMounts.
Defaults to false.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
+optional
secretRefLocalObjectReferenceLocalObjectReference
userstringstringuser is the rados user name.
Default is admin.
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it
+optional
-

RawArtifact

-
-

RawArtifact allows raw string content to be placed as an artifact in a container

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| data | string | string | | | Data is the string contents of the artifact | |

ResourceFieldSelector

-
-

ResourceFieldSelector represents container resources (cpu, memory) and their output format
+structType=atomic

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| containerName | string | string | | | Container name: required for volumes, optional for env vars +optional | |
| divisor | Quantity | Quantity | | | | |
| resource | string | string | | | Required: resource to select | |

ResourceList

-

ResourceList

-

ResourceRequirements

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| limits | ResourceList | ResourceList | | | | |
| requests | ResourceList | ResourceList | | | | |

ResourceTemplate

-
-

ResourceTemplate is a template subtype to manipulate kubernetes resources

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| action | string | string | | | Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch | |
| failureCondition | string | string | | | FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed | |
| flags | []string | []string | | | Flags is a set of additional options passed to kubectl before submitting a resource, e.g. to disable resource validation: flags: [ "--validate=false" ] | |
| manifest | string | string | | | Manifest contains the kubernetes manifest | |
| manifestFrom | ManifestFrom | ManifestFrom | | | | |
| mergeStrategy | string | string | | | MergeStrategy is the strategy used to merge a patch. It defaults to "strategic". Must be one of: strategic, merge, json | |
| setOwnerReference | boolean | bool | | | SetOwnerReference sets the reference to the workflow on the OwnerReference of generated resource. | |
| successCondition | string | string | | | SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step | |
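
As a rough illustration of these fields, a resource template that creates a ConfigMap and ties its lifetime to the workflow might look like the following; the manifest contents and flags are invented for the example:

```yaml
templates:
  - name: create-configmap
    resource:
      action: create                  # one of: get, create, apply, delete, replace, patch
      setOwnerReference: true         # garbage-collect the resource with the workflow
      flags: ["--validate=false"]     # extra options passed to kubectl
      manifest: |                     # the kubernetes manifest to submit
        apiVersion: v1
        kind: ConfigMap
        metadata:
          generateName: example-cm-
        data:
          greeting: hello
```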

RetryAffinity

-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| nodeAntiAffinity | RetryNodeAntiAffinity | RetryNodeAntiAffinity | | | | |

RetryNodeAntiAffinity

-
-

In order to prevent running steps on the same host, it uses "kubernetes.io/hostname".

-
-

interface{}

-

RetryPolicy

| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| RetryPolicy | string | string | | | |

RetryStrategy

-
-

RetryStrategy provides controls on how to retry a workflow step

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| affinity | RetryAffinity | RetryAffinity | | | | |
| backoff | Backoff | Backoff | | | | |
| expression | string | string | | | Expression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored | |
| limit | IntOrString | IntOrString | | | | |
| retryPolicy | RetryPolicy | RetryPolicy | | | | |
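
A hedged sketch of a retried step, assuming the documented `lastRetry.exitCode` retry variable and placeholder backoff values:

```yaml
templates:
  - name: flaky-step
    retryStrategy:
      limit: "3"                              # IntOrString: retry at most 3 times
      retryPolicy: OnFailure                  # RetryPolicy
      backoff:                                # Backoff (fields described elsewhere in this reference)
        duration: "10s"
        factor: "2"
      expression: "lastRetry.exitCode != 0"   # only retry on a non-zero exit code
    container:
      image: busybox
      command: [sh, -c, "exit 1"]
```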

S3Artifact

-
-

S3Artifact is the location of an S3 artifact

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| accessKeySecret | SecretKeySelector | SecretKeySelector | | | | |
| bucket | string | string | | | Bucket is the name of the bucket | |
| caSecret | SecretKeySelector | SecretKeySelector | | | | |
| createBucketIfNotPresent | CreateS3BucketOptions | CreateS3BucketOptions | | | | |
| encryptionOptions | S3EncryptionOptions | S3EncryptionOptions | | | | |
| endpoint | string | string | | | Endpoint is the hostname of the bucket endpoint | |
| insecure | boolean | bool | | | Insecure will connect to the service with TLS | |
| key | string | string | | | Key is the key in the bucket where the artifact resides | |
| region | string | string | | | Region contains the optional bucket region | |
| roleARN | string | string | | | RoleARN is the Amazon Resource Name (ARN) of the role to assume. | |
| secretKeySecret | SecretKeySelector | SecretKeySelector | | | | |
| useSDKCreds | boolean | bool | | | UseSDKCreds tells the driver to figure out credentials based on sdk defaults. | |
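
A hedged sketch of an output artifact stored in S3; the endpoint, bucket, key, and secret names are placeholders, not defaults:

```yaml
templates:
  - name: save-output
    container:
      image: busybox
      command: [sh, -c, "echo hello > /tmp/result.txt"]
    outputs:
      artifacts:
        - name: result
          path: /tmp/result.txt
          s3:                               # S3Artifact
            endpoint: s3.amazonaws.com      # hostname of the bucket endpoint
            bucket: my-artifact-bucket      # placeholder bucket name
            key: results/result.txt         # key within the bucket
            accessKeySecret:                # SecretKeySelector
              name: my-s3-credentials
              key: accessKey
            secretKeySecret:
              name: my-s3-credentials
              key: secretKey
```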

S3EncryptionOptions

-
-

S3EncryptionOptions used to determine encryption options during s3 operations

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| enableEncryption | boolean | bool | | | EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used | |
| kmsEncryptionContext | string | string | | | KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information | |
| kmsKeyId | string | string | | | KMSKeyId tells the driver to encrypt the object using the specified KMS Key. | |
| serverSideCustomerKeySecret | SecretKeySelector | SecretKeySelector | | | | |

SELinuxOptions

-
-

SELinuxOptions are the labels to be applied to the container

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| level | string | string | | | Level is SELinux level label that applies to the container. +optional | |
| role | string | string | | | Role is a SELinux role label that applies to the container. +optional | |
| type | string | string | | | Type is a SELinux type label that applies to the container. +optional | |
| user | string | string | | | User is a SELinux user label that applies to the container. +optional | |

ScaleIOVolumeSource

-
-

ScaleIOVolumeSource represents a persistent ScaleIO volume

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is the filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Ex. "ext4", "xfs", "ntfs".
Default is "xfs".
+optional
gatewaystringstringgateway is the host address of the ScaleIO API Gateway.
protectionDomainstringstringprotectionDomain is the name of the ScaleIO Protection Domain for the configured storage.
+optional
readOnlybooleanboolreadOnly Defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
+optional
secretRefLocalObjectReferenceLocalObjectReference
sslEnabledbooleanboolsslEnabled Flag enable/disable SSL communication with Gateway, default false
+optional
storageModestringstringstorageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned.
Default is ThinProvisioned.
+optional
storagePoolstringstringstoragePool is the ScaleIO Storage Pool associated with the protection domain.
+optional
systemstringstringsystem is the name of the storage system as configured in ScaleIO.
volumeNamestringstringvolumeName is the name of a volume already created in the ScaleIO system
that is associated with this volume source.
-

ScriptTemplate

-
-

ScriptTemplate is a template subtype to enable scripting through code steps

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
args[]string[]stringArguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
+optional
command[]string[]stringEntrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
+optional
env[]EnvVar[]*EnvVarList of environment variables to set in the container.
Cannot be updated.
+optional
+patchMergeKey=name
+patchStrategy=merge
envFrom[]EnvFromSource[]*EnvFromSourceList of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.
+optional
imagestringstringContainer image name.
More info: https://kubernetes.io/docs/concepts/containers/images
This field is optional to allow higher level config management to default or override
container images in workload controllers like Deployments and StatefulSets.
+optional
imagePullPolicyPullPolicyPullPolicy
lifecycleLifecycleLifecycle
livenessProbeProbeProbe
namestringstringName of the container specified as a DNS_LABEL.
Each container in a pod must have a unique name (DNS_LABEL).
Cannot be updated.
ports[]ContainerPort[]*ContainerPortList of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network.
Cannot be updated.
+optional
+patchMergeKey=containerPort
+patchStrategy=merge
+listType=map
+listMapKey=containerPort
+listMapKey=protocol
readinessProbeProbeProbe
resourcesResourceRequirementsResourceRequirements
securityContextSecurityContextSecurityContext
sourcestringstringSource contains the source code of the script to execute
startupProbeProbeProbe
stdinbooleanboolWhether this container should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the container will always result in EOF.
Default is false.
+optional
stdinOncebooleanboolWhether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the container is restarted. If this
flag is false, a container processes that reads from stdin will never receive an EOF.
Default is false
+optional
terminationMessagePathstringstringOptional: Path at which the file to which the container's termination message
will be written is mounted into the container's filesystem.
Message written is intended to be brief final status, such as an assertion failure message.
Will be truncated by the node if greater than 4096 bytes. The total message length across
all containers will be limited to 12kb.
Defaults to /dev/termination-log.
Cannot be updated.
+optional
terminationMessagePolicyTerminationMessagePolicyTerminationMessagePolicy
ttybooleanboolWhether this container should allocate a TTY for itself, also requires 'stdin' to be true.
Default is false.
+optional
volumeDevices[]VolumeDevice[]*VolumeDevicevolumeDevices is the list of block devices to be used by the container.
+patchMergeKey=devicePath
+patchStrategy=merge
+optional
volumeMounts[]VolumeMount[]*VolumeMountPod volumes to mount into the container's filesystem.
Cannot be updated.
+optional
+patchMergeKey=mountPath
+patchStrategy=merge
workingDirstringstringContainer's working directory.
If not specified, the container runtime's default will be used, which
might be configured in the container image.
Cannot be updated.
+optional
-
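
The `source` field is what distinguishes a script template from a plain container template. A minimal sketch; the image and the script body are illustrative:

```yaml
templates:
  - name: gen-random-int
    script:                       # ScriptTemplate
      image: python:alpine3.9
      command: [python]
      source: |                   # Source contains the script to execute
        import random
        print(random.randint(1, 100))
```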

SeccompProfile

-
-

Only one profile source may be set.
+union

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| localhostProfile | string | string | | | localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". +optional | |
| type | SeccompProfileType | SeccompProfileType | | | | |

SeccompProfileType

-
-

+enum

-
| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| SeccompProfileType | string | string | | +enum | |

SecretEnvSource

-
-

The contents of the target Secret's Data field will represent the key-value pairs as environment variables.

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| name | string | string | | | Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional | |
| optional | boolean | bool | | | Specify whether the Secret must be defined +optional | |

SecretKeySelector

-
-

+structType=atomic

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| key | string | string | | | The key of the secret to select from. Must be a valid secret key. | |
| name | string | string | | | Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid? +optional | |
| optional | boolean | bool | | | Specify whether the Secret or its key must be defined +optional | |

SecretProjection

-
-

The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
items[]KeyToPath[]*KeyToPathitems if unspecified, each key-value pair in the Data field of the referenced
Secret will be projected into the volume as a file whose name is the
key and content is the value. If specified, the listed keys will be
projected into the specified paths, and unlisted keys will not be
present. If a key is specified which is not present in the Secret,
the volume setup will error unless it is marked optional. Paths must be
relative and may not contain the '..' path or start with '..'.
+optional
namestringstringName of the referent.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
TODO: Add other useful fields. apiVersion, kind, uid?
+optional
optionalbooleanbooloptional field specify whether the Secret or its key must be defined
+optional
-

SecretVolumeSource

-
-

The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
defaultModeint32 (formatted integer)int32defaultMode is Optional: mode bits used to set permissions on created files by default.
Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511.
YAML accepts both octal and decimal values, JSON requires decimal values
for mode bits. Defaults to 0644.
Directories within the path are not affected by this setting.
This might be in conflict with other options that affect the file
mode, like fsGroup, and the result can be other mode bits set.
+optional
items[]KeyToPath[]*KeyToPathitems If unspecified, each key-value pair in the Data field of the referenced
Secret will be projected into the volume as a file whose name is the
key and content is the value. If specified, the listed keys will be
projected into the specified paths, and unlisted keys will not be
present. If a key is specified which is not present in the Secret,
the volume setup will error unless it is marked optional. Paths must be
relative and may not contain the '..' path or start with '..'.
+optional
optionalbooleanbooloptional field specify whether the Secret or its keys must be defined
+optional
secretNamestringstringsecretName is the name of the secret in the pod's namespace to use.
More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
+optional
-

SecurityContext

-
-

Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
allowPrivilegeEscalationbooleanboolAllowPrivilegeEscalation controls whether a process can gain more
privileges than its parent process. This bool directly controls if
the no_new_privs flag will be set on the container process.
AllowPrivilegeEscalation is true always when the container is:
1) run as Privileged
2) has CAP_SYS_ADMIN
Note that this field cannot be set when spec.os.name is windows.
+optional
capabilitiesCapabilitiesCapabilities
privilegedbooleanboolRun container in privileged mode.
Processes in privileged containers are essentially equivalent to root on the host.
Defaults to false.
Note that this field cannot be set when spec.os.name is windows.
+optional
procMountProcMountTypeProcMountType
readOnlyRootFilesystembooleanboolWhether this container has a read-only root filesystem.
Default is false.
Note that this field cannot be set when spec.os.name is windows.
+optional
runAsGroupint64 (formatted integer)int64The GID to run the entrypoint of the container process.
Uses runtime default if unset.
May also be set in PodSecurityContext. If set in both SecurityContext and
PodSecurityContext, the value specified in SecurityContext takes precedence.
Note that this field cannot be set when spec.os.name is windows.
+optional
runAsNonRootbooleanboolIndicates that the container must run as a non-root user.
If true, the Kubelet will validate the image at runtime to ensure that it
does not run as UID 0 (root) and fail to start the container if it does.
If unset or false, no such validation will be performed.
May also be set in PodSecurityContext. If set in both SecurityContext and
PodSecurityContext, the value specified in SecurityContext takes precedence.
+optional
runAsUserint64 (formatted integer)int64The UID to run the entrypoint of the container process.
Defaults to user specified in image metadata if unspecified.
May also be set in PodSecurityContext. If set in both SecurityContext and
PodSecurityContext, the value specified in SecurityContext takes precedence.
Note that this field cannot be set when spec.os.name is windows.
+optional
seLinuxOptionsSELinuxOptionsSELinuxOptions
seccompProfileSeccompProfileSeccompProfile
windowsOptionsWindowsSecurityContextOptionsWindowsSecurityContextOptions
-

SemaphoreRef

-
-

SemaphoreRef is a reference of Semaphore

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| configMapKeyRef | ConfigMapKeySelector | ConfigMapKeySelector | | | | |
| namespace | string | string | | "[namespace of workflow]" | | |

Sequence

-
-

Sequence expands a workflow step into a numeric range

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| count | IntOrString | IntOrString | | | | |
| end | IntOrString | IntOrString | | | | |
| format | string | string | | | Format is a printf format string to format the value in the sequence | |
| start | IntOrString | IntOrString | | | | |
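
A hedged sketch of a step expanded over a sequence; the template names and the format string are placeholders:

```yaml
templates:
  - name: loop-example
    steps:
      - - name: print-number
          template: echo
          arguments:
            parameters:
              - name: n
                value: "{{item}}"       # each generated value in turn
          withSequence:                 # Sequence
            start: "1"                  # IntOrString
            end: "5"
            format: "%02d"              # printf format: 01, 02, ..., 05
```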

ServiceAccountTokenProjection

-
-

ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise).

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
audiencestringstringaudience is the intended audience of the token. A recipient of a token
must identify itself with an identifier specified in the audience of the
token, and otherwise should reject the token. The audience defaults to the
identifier of the apiserver.
+optional
expirationSecondsint64 (formatted integer)int64expirationSeconds is the requested duration of validity of the service
account token. As the token approaches expiration, the kubelet volume
plugin will proactively rotate the service account token. The kubelet will
start trying to rotate the token if the token is older than 80 percent of
its time to live or if the token is older than 24 hours.Defaults to 1 hour
and must be at least 10 minutes.
+optional
pathstringstringpath is the path relative to the mount point of the file to project the
token into.
-

StorageMedium

| Name | Type | Go type | Default | Description | Example |
|------|------|---------|---------|-------------|---------|
| StorageMedium | string | string | | | |

StorageOSVolumeSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is the filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
+optional
readOnlybooleanboolreadOnly defaults to false (read/write). ReadOnly here will force
the ReadOnly setting in VolumeMounts.
+optional
secretRefLocalObjectReferenceLocalObjectReference
volumeNamestringstringvolumeName is the human-readable name of the StorageOS volume. Volume
names are only unique within a namespace.
volumeNamespacestringstringvolumeNamespace specifies the scope of the volume within StorageOS. If no
namespace is specified then the Pod's namespace will be used. This allows the
Kubernetes name scoping to be mirrored within StorageOS for tighter integration.
Set VolumeName to any name to override the default behaviour.
Set to "default" if you are not using namespaces within StorageOS.
Namespaces that do not pre-exist within StorageOS will be created.
+optional
-

SuppliedValueFrom

-

interface{}

-

SuspendTemplate

-
-

SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| duration | string | string | | | Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h" | |
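
A minimal sketch of a suspend step that resumes on its own after two minutes:

```yaml
templates:
  - name: wait-a-bit
    suspend:                 # SuspendTemplate
      duration: "2m"         # also accepts plain seconds, e.g. "120"
```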

Synchronization

-
-

Synchronization holds synchronization lock configuration

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| mutex | Mutex | Mutex | | | | |
| semaphore | SemaphoreRef | SemaphoreRef | | | | |
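
A hedged sketch of limiting a template's concurrency with a semaphore held in a ConfigMap; the ConfigMap name `my-semaphore` and key `template` are placeholders that must exist in the workflow's namespace:

```yaml
templates:
  - name: limited-step
    synchronization:
      semaphore:                     # SemaphoreRef
        configMapKeyRef:             # ConfigMapKeySelector
          name: my-semaphore
          key: template              # the value at this key sets the concurrency limit
    container:
      image: busybox
      command: [sleep, "10"]
```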

Sysctl

-
-

Sysctl defines a kernel parameter to be set

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| name | string | string | | | Name of a property to set | |
| value | string | string | | | Value of a property to set | |

TCPSocketAction

-
-

TCPSocketAction describes an action based on opening a socket

-
-

Properties

| Name | Type | Go type | Required | Default | Description | Example |
|------|------|---------|----------|---------|-------------|---------|
| host | string | string | | | Optional: Host name to connect to, defaults to the pod IP. +optional | |
| port | IntOrString | IntOrString | | | | |

TaintEffect

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
TaintEffectstringstring+enum
-

TarStrategy

-
-

TarStrategy will tar and gzip the file or directory when saving

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
compressionLevelint32 (formatted integer)int32CompressionLevel specifies the gzip compression level to use for the artifact.
Defaults to gzip.DefaultCompression.
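A small sketch of where compressionLevel is set in practice, on an output artifact's archive strategy; the artifact name and path are assumptions.

```yaml
outputs:
  artifacts:
    - name: results               # assumed artifact name
      path: /tmp/results
      archive:
        tar:
          compressionLevel: 9     # 1 = fastest, 9 = smallest; unset uses gzip.DefaultCompression
```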
-

Template

-
-

Template is a reusable and composable unit of execution in a workflow

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
activeDeadlineSecondsIntOrStringIntOrString
affinityAffinityAffinity
archiveLocationArtifactLocationArtifactLocation
automountServiceAccountTokenbooleanboolAutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods.
ServiceAccountName of ExecutorConfig must be specified if this value is false.
containerContainerContainer
containerSetContainerSetTemplateContainerSetTemplate
daemonbooleanboolDaemon will allow a workflow to proceed to the next step so long as the container reaches readiness
dagDAGTemplateDAGTemplate
dataDataData
executorExecutorConfigExecutorConfig
failFastbooleanboolFailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this
template is expanded with withItems, etc.
hostAliases[]HostAlias[]*HostAliasHostAliases is an optional list of hosts and IPs that will be injected into the pod spec
+patchStrategy=merge
+patchMergeKey=ip
httpHTTPHTTP
initContainers[]UserContainer[]*UserContainerInitContainers is a list of containers which run before the main container.
+patchStrategy=merge
+patchMergeKey=name
inputsInputsInputs
memoizeMemoizeMemoize
metadataMetadataMetadata
metricsMetricsMetrics
namestringstringName is the name of the template
nodeSelectormap of stringmap[string]stringNodeSelector is a selector to schedule this step of the workflow to be
run on the selected node(s). Overrides the selector set at the workflow level.
outputsOutputsOutputs
parallelismint64 (formatted integer)int64Parallelism limits the max total parallel pods that can execute at the same time within the
boundaries of this template invocation. If additional steps/dag templates are invoked, the
pods created by those templates will not be counted towards this total.
pluginPluginPlugin
podSpecPatchstringstringPodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of
container fields which are not strings (e.g. resource limits).
priorityint32 (formatted integer)int32Priority to apply to workflow pods.
priorityClassNamestringstringPriorityClassName to apply to workflow pods.
resourceResourceTemplateResourceTemplate
retryStrategyRetryStrategyRetryStrategy
schedulerNamestringstringIf specified, the pod will be dispatched by specified scheduler.
Or it will be dispatched by workflow scope scheduler if specified.
If neither specified, the pod will be dispatched by default scheduler.
+optional
scriptScriptTemplateScriptTemplate
securityContextPodSecurityContextPodSecurityContext
serviceAccountNamestringstringServiceAccountName to apply to workflow pods
sidecars[]UserContainer[]*UserContainerSidecars is a list of containers which run alongside the main container
Sidecars are automatically killed when the main container completes
+patchStrategy=merge
+patchMergeKey=name
steps[]ParallelSteps[]ParallelStepsSteps define a series of sequential/parallel workflow steps
suspendSuspendTemplateSuspendTemplate
synchronizationSynchronizationSynchronization
timeoutstringstringTimeout allows you to set the total node execution timeout duration counting from the node's start time.
This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates.
tolerations[]Toleration[]*TolerationTolerations to apply to workflow pods.
+patchStrategy=merge
+patchMergeKey=key
volumes[]Volume[]*VolumeVolumes is a list of volumes that can be mounted by containers in a template.
+patchStrategy=merge
+patchMergeKey=name
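To ground the table above, a minimal container template sketch combining a few of these fields; the template name, image, and parameter are illustrative assumptions.

```yaml
templates:
  - name: print-message               # assumed template name
    inputs:
      parameters:
        - name: message
    container:
      image: alpine:3.19              # assumed image
      command: [echo]
      args: ["{{inputs.parameters.message}}"]
    activeDeadlineSeconds: 300        # applies to container/script templates only
    retryStrategy:
      limit: "2"
```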
-

TemplateRef

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
clusterScopebooleanboolClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate).
namestringstringName is the resource name of the template.
templatestringstringTemplate is the name of referred template in the resource.
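A brief sketch of a step invoking a template from a separate WorkflowTemplate via templateRef; the resource and template names are assumptions.

```yaml
steps:
  - - name: call-shared
      templateRef:
        name: my-workflow-template    # assumed WorkflowTemplate resource name
        template: print-message       # assumed template inside that resource
        # clusterScope: true          # set when referencing a ClusterWorkflowTemplate
```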
-

TerminationMessagePolicy

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
TerminationMessagePolicystringstring+enum
-

Time

-
-

+protobuf.options.marshal=false +protobuf.as=Timestamp +protobuf.options.(gogoproto.goproto_stringer)=false

-
-

interface{}

-

Toleration

-
-

The pod this Toleration is attached to tolerates any taint that matches the triple &lt;key,value,effect&gt; using the matching operator &lt;operator&gt;.

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
effectTaintEffectTaintEffect
keystringstringKey is the taint key that the toleration applies to. Empty means match all taint keys.
If the key is empty, operator must be Exists; this combination means to match all values and all keys.
+optional
operatorTolerationOperatorTolerationOperator
tolerationSecondsint64 (formatted integer)int64TolerationSeconds represents the period of time the toleration (which must be
of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default,
it is not set, which means tolerate the taint forever (do not evict). Zero and
negative values will be treated as 0 (evict immediately) by the system.
+optional
valuestringstringValue is the taint value the toleration matches to.
If the operator is Exists, the value should be empty, otherwise just a regular string.
+optional
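A hedged example of tolerations as they would appear on a workflow or template spec; the taint keys and values are assumptions.

```yaml
tolerations:
  - key: dedicated              # assumed taint key
    operator: Equal
    value: workflows
    effect: NoSchedule
  - key: maintenance            # assumed taint key
    operator: Exists            # value must be empty when operator is Exists
    effect: NoExecute
    tolerationSeconds: 600      # evicted 10 minutes after the taint appears
```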
-

TolerationOperator

-
-

+enum

-
- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
TolerationOperatorstringstring+enum
-

Transformation

-

[]TransformationStep

-

TransformationStep

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
expressionstringstringExpression defines an expr expression to apply
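A sketch of a data template applying a transformation expression to a list of artifact paths, loosely following the pattern in the Argo data-template documentation; the bucket name and file suffix are assumptions.

```yaml
- name: list-log-files
  data:
    source:
      artifactPaths:
        name: input-logs          # assumed artifact name
        s3:
          bucket: my-bucket       # assumed bucket
    transformation:
      - expression: "filter(data, {# endsWith \".log\"})"   # keep only *.log keys
```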
-

Type

- - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
Typeint64 (formatted integer)int64
-

TypedLocalObjectReference

-
-

TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace. +structType=atomic

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
apiGroupstringstringAPIGroup is the group for the resource being referenced.
If APIGroup is not specified, the specified Kind must be in the core API group.
For any other third-party types, APIGroup is required.
+optional
kindstringstringKind is the type of resource being referenced
namestringstringName is the name of resource being referenced
-

UID

-
-

UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated.

-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
UIDstringstringUID is a type that holds unique ID values, including UUIDs. Because we
don't ONLY use UUIDs, this is an alias to string. Being a type captures
intent and helps make sure that UIDs and names do not get conflated.
-

URIScheme

-
-

URIScheme identifies the scheme used for connection to a host for Get actions +enum

-
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeDefaultDescriptionExample
URISchemestringstringURIScheme identifies the scheme used for connection to a host for Get actions
+enum
-

UserContainer

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
args[]string[]stringArguments to the entrypoint.
The container image's CMD is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
+optional
command[]string[]stringEntrypoint array. Not executed within a shell.
The container image's ENTRYPOINT is used if this is not provided.
Variable references $(VAR_NAME) are expanded using the container's environment. If a variable
cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced
to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will
produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless
of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
+optional
env[]EnvVar[]*EnvVarList of environment variables to set in the container.
Cannot be updated.
+optional
+patchMergeKey=name
+patchStrategy=merge
envFrom[]EnvFromSource[]*EnvFromSourceList of sources to populate environment variables in the container.
The keys defined within a source must be a C_IDENTIFIER. All invalid keys
will be reported as an event when the container is starting. When a key exists in multiple
sources, the value associated with the last source will take precedence.
Values defined by an Env with a duplicate key will take precedence.
Cannot be updated.
+optional
imagestringstringContainer image name.
More info: https://kubernetes.io/docs/concepts/containers/images
This field is optional to allow higher level config management to default or override
container images in workload controllers like Deployments and StatefulSets.
+optional
imagePullPolicyPullPolicyPullPolicy
lifecycleLifecycleLifecycle
livenessProbeProbeProbe
mirrorVolumeMountsbooleanboolMirrorVolumeMounts will mount the same volumes specified in the main container
to the container (including artifacts), at the same mountPaths. This enables
dind daemon to partially see the same filesystem as the main container in
order to use features such as docker volume binding
namestringstringName of the container specified as a DNS_LABEL.
Each container in a pod must have a unique name (DNS_LABEL).
Cannot be updated.
ports[]ContainerPort[]*ContainerPortList of ports to expose from the container. Exposing a port here gives
the system additional information about the network connections a
container uses, but is primarily informational. Not specifying a port here
DOES NOT prevent that port from being exposed. Any port which is
listening on the default "0.0.0.0" address inside a container will be
accessible from the network.
Cannot be updated.
+optional
+patchMergeKey=containerPort
+patchStrategy=merge
+listType=map
+listMapKey=containerPort
+listMapKey=protocol
readinessProbeProbeProbe
resourcesResourceRequirementsResourceRequirements
securityContextSecurityContextSecurityContext
startupProbeProbeProbe
stdinbooleanboolWhether this container should allocate a buffer for stdin in the container runtime. If this
is not set, reads from stdin in the container will always result in EOF.
Default is false.
+optional
stdinOncebooleanboolWhether the container runtime should close the stdin channel after it has been opened by
a single attach. When stdin is true the stdin stream will remain open across multiple attach
sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the
first client attaches to stdin, and then remains open and accepts data until the client disconnects,
at which time stdin is closed and remains closed until the container is restarted. If this
flag is false, a container process that reads from stdin will never receive an EOF.
Default is false
+optional
terminationMessagePathstringstringOptional: Path at which the file to which the container's termination message
will be written is mounted into the container's filesystem.
Message written is intended to be brief final status, such as an assertion failure message.
Will be truncated by the node if greater than 4096 bytes. The total message length across
all containers will be limited to 12kb.
Defaults to /dev/termination-log.
Cannot be updated.
+optional
terminationMessagePolicyTerminationMessagePolicyTerminationMessagePolicy
ttybooleanboolWhether this container should allocate a TTY for itself, also requires 'stdin' to be true.
Default is false.
+optional
volumeDevices[]VolumeDevice[]*VolumeDevicevolumeDevices is the list of block devices to be used by the container.
+patchMergeKey=devicePath
+patchStrategy=merge
+optional
volumeMounts[]VolumeMount[]*VolumeMountPod volumes to mount into the container's filesystem.
Cannot be updated.
+optional
+patchMergeKey=mountPath
+patchStrategy=merge
workingDirstringstringContainer's working directory.
If not specified, the container runtime's default will be used, which
might be configured in the container image.
Cannot be updated.
+optional
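To show where UserContainer entries appear, a hedged sketch of a template with an init container and a sidecar alongside the main container; the names and images are assumptions.

```yaml
- name: main-with-helpers
  initContainers:
    - name: prepare               # runs to completion before the main container starts
      image: alpine:3.19
      command: [sh, -c]
      args: ["echo preparing"]
  container:
    image: alpine:3.19
    command: [sh, -c]
    args: ["echo main step"]
  sidecars:
    - name: helper                # killed automatically when the main container completes
      image: alpine:3.19
      command: [sh, -c]
      args: ["sleep 300"]
```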
-

ValueFrom

-
-

ValueFrom describes a location in which to obtain the value to a parameter

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
configMapKeyRefConfigMapKeySelectorConfigMapKeySelector
defaultAnyStringAnyString
eventstringstringSelector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message
expressionstringstringExpression, if defined, is evaluated to specify the value for the parameter
jqFilterstringstringJQFilter expression against the resource object in resource templates
jsonPathstringstringJSONPath of a resource to retrieve an output parameter value from in resource templates
parameterstringstringParameter reference to a step or dag task in which to retrieve an output parameter value from
(e.g. '{{steps.mystep.outputs.myparam}}')
pathstringstringPath in the container to retrieve an output parameter value from in container templates
suppliedSuppliedValueFromSuppliedValueFrom
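A short sketch of the most common use of ValueFrom: reading an output parameter from a file written by the container. The parameter name and path are assumptions.

```yaml
outputs:
  parameters:
    - name: result                # assumed parameter name
      valueFrom:
        path: /tmp/result.txt     # assumed file written by the container
        default: "none"           # used if the file does not exist
```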
-

Volume

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
awsElasticBlockStoreAWSElasticBlockStoreVolumeSourceAWSElasticBlockStoreVolumeSource
azureDiskAzureDiskVolumeSourceAzureDiskVolumeSource
azureFileAzureFileVolumeSourceAzureFileVolumeSource
cephfsCephFSVolumeSourceCephFSVolumeSource
cinderCinderVolumeSourceCinderVolumeSource
configMapConfigMapVolumeSourceConfigMapVolumeSource
csiCSIVolumeSourceCSIVolumeSource
downwardAPIDownwardAPIVolumeSourceDownwardAPIVolumeSource
emptyDirEmptyDirVolumeSourceEmptyDirVolumeSource
ephemeralEphemeralVolumeSourceEphemeralVolumeSource
fcFCVolumeSourceFCVolumeSource
flexVolumeFlexVolumeSourceFlexVolumeSource
flockerFlockerVolumeSourceFlockerVolumeSource
gcePersistentDiskGCEPersistentDiskVolumeSourceGCEPersistentDiskVolumeSource
gitRepoGitRepoVolumeSourceGitRepoVolumeSource
glusterfsGlusterfsVolumeSourceGlusterfsVolumeSource
hostPathHostPathVolumeSourceHostPathVolumeSource
iscsiISCSIVolumeSourceISCSIVolumeSource
namestringstringname of the volume.
Must be a DNS_LABEL and unique within the pod.
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
nfsNFSVolumeSourceNFSVolumeSource
persistentVolumeClaimPersistentVolumeClaimVolumeSourcePersistentVolumeClaimVolumeSource
photonPersistentDiskPhotonPersistentDiskVolumeSourcePhotonPersistentDiskVolumeSource
portworxVolumePortworxVolumeSourcePortworxVolumeSource
projectedProjectedVolumeSourceProjectedVolumeSource
quobyteQuobyteVolumeSourceQuobyteVolumeSource
rbdRBDVolumeSourceRBDVolumeSource
scaleIOScaleIOVolumeSourceScaleIOVolumeSource
secretSecretVolumeSourceSecretVolumeSource
storageosStorageOSVolumeSourceStorageOSVolumeSource
vsphereVolumeVsphereVirtualDiskVolumeSourceVsphereVirtualDiskVolumeSource
-

VolumeDevice

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
devicePathstringstringdevicePath is the path inside of the container that the device will be mapped to.
namestringstringname must match the name of a persistentVolumeClaim in the pod
-

VolumeMount

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
mountPathstringstringPath within the container at which the volume should be mounted. Must
not contain ':'.
mountPropagationMountPropagationModeMountPropagationMode
namestringstringThis must match the Name of a Volume.
readOnlybooleanboolMounted read-only if true, read-write otherwise (false or unspecified).
Defaults to false.
+optional
subPathstringstringPath within the volume from which the container's volume should be mounted.
Defaults to "" (volume's root).
+optional
subPathExprstringstringExpanded path within the volume from which the container's volume should be mounted.
Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment.
Defaults to "" (volume's root).
SubPathExpr and SubPath are mutually exclusive.
+optional
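A hedged sketch tying Volume and VolumeMount together: an emptyDir volume declared at the workflow level and mounted into a container template. The names are assumptions.

```yaml
volumes:
  - name: workdir                 # assumed volume name
    emptyDir: {}
templates:
  - name: use-volume
    container:
      image: alpine:3.19
      command: [sh, -c]
      args: ["ls /work"]
      volumeMounts:
        - name: workdir           # must match the volume name above
          mountPath: /work
```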
-

VolumeProjection

-
-

Projection that may be projected along with other supported volume types

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
configMapConfigMapProjectionConfigMapProjection
downwardAPIDownwardAPIProjectionDownwardAPIProjection
secretSecretProjectionSecretProjection
serviceAccountTokenServiceAccountTokenProjectionServiceAccountTokenProjection
-

VsphereVirtualDiskVolumeSource

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
fsTypestringstringfsType is filesystem type to mount.
Must be a filesystem type supported by the host operating system.
Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified.
+optional
storagePolicyIDstringstringstoragePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName.
+optional
storagePolicyNamestringstringstoragePolicyName is the storage Policy Based Management (SPBM) profile name.
+optional
volumePathstringstringvolumePath is the path that identifies vSphere volume vmdk
-

WeightedPodAffinityTerm

-
-

The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)

-
-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
podAffinityTermPodAffinityTermPodAffinityTerm
weightint32 (formatted integer)int32weight associated with matching the corresponding podAffinityTerm,
in the range 1-100.
-

WindowsSecurityContextOptions

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
gmsaCredentialSpecstringstringGMSACredentialSpec is where the GMSA admission webhook
(https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the
GMSA credential spec named by the GMSACredentialSpecName field.
+optional
gmsaCredentialSpecNamestringstringGMSACredentialSpecName is the name of the GMSA credential spec to use.
+optional
hostProcessbooleanboolHostProcess determines if a container should be run as a 'Host Process' container.
This field is alpha-level and will only be honored by components that enable the
WindowsHostProcessContainers feature flag. Setting this field without the feature
flag will result in errors when validating the Pod. All of a Pod's containers must
have the same effective HostProcess value (it is not allowed to have a mix of HostProcess
containers and non-HostProcess containers). In addition, if HostProcess is true
then HostNetwork must also be set to true.
+optional
runAsUserNamestringstringThe UserName in Windows to run the entrypoint of the container process.
Defaults to the user specified in image metadata if unspecified.
May also be set in PodSecurityContext. If set in both SecurityContext and
PodSecurityContext, the value specified in SecurityContext takes precedence.
+optional
-

Workflow

-

Properties

- - - - - - - - - - - - - - - - - - - - - - - -
NameTypeGo typeRequiredDefaultDescriptionExample
metadataObjectMetaObjectMeta
-

ZipStrategy

-
-

ZipStrategy will unzip zipped input artifacts

-
-

interface{}

-
-
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/executor_swagger/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/faq/index.html b/faq/index.html index dba47dfccbc6..3472a7309196 100644 --- a/faq/index.html +++ b/faq/index.html @@ -1,4029 +1,11 @@ - - - + - - - - - - - - - - - - FAQ - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + FAQ - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - - - - -
-
- - - - - - - - -

FAQ

-

"token not valid", "any bearer token is able to login in the UI or use the API"

-

You may not have configured Argo Server authentication correctly.

-

If you want SSO, try running with --auth-mode=sso. If you're using --auth-mode=client, make sure you have Bearer in front of the ServiceAccount Secret, as mentioned in Access Token.

-

Learn more about the Argo Server set-up

-

Argo Server returns an EOF error

-

Since v3.0 the Argo Server listens for HTTPS requests, rather than HTTP. Try changing your URL to HTTPS, or start Argo Server using --secure=false.

-

My workflow hangs

-

Check your wait container logs:

-

Is there an RBAC error?

-

Learn more about workflow RBAC

-

Return "unknown (get pods)" error

-

You're probably getting a permission denied error because your RBAC is not configured.

-

Learn more about workflow RBAC and even more details

-

There is an error about /var/run/docker.sock

-

Try using a different container runtime executor.

-

Learn more about executors

-
-
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/faq/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/fields/index.html b/fields/index.html index 76d29e60a4b3..a1e93a7d73b4 100644 --- a/fields/index.html +++ b/fields/index.html @@ -1,20605 +1,11 @@ - - - + - - - - - - - - - - - - Field Reference - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Field Reference - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Field Reference

-

Workflow

-

Workflow is the definition of a workflow resource

-
-Examples (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
apiVersionstringAPIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kindstringKind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadataObjectMetaNo description available
specWorkflowSpecNo description available
statusWorkflowStatusNo description available
-

CronWorkflow

-

CronWorkflow is the definition of a scheduled workflow resource

-
-Examples (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
apiVersionstringAPIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kindstringKind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadataObjectMetaNo description available
specCronWorkflowSpecNo description available
statusCronWorkflowStatusNo description available
-

WorkflowTemplate

-

WorkflowTemplate is the definition of a workflow template resource

-
-Examples (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
apiVersionstringAPIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kindstringKind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadataObjectMetaNo description available
specWorkflowSpecNo description available
-

WorkflowSpec

-

WorkflowSpec is the specification of a Workflow.

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
activeDeadlineSecondsintegerOptional duration in seconds relative to the workflow start time which the workflow is allowed to run before the controller terminates the io.argoproj.workflow.v1alpha1. A value of zero is used to terminate a Running workflow
affinityAffinityAffinity sets the scheduling constraints for all pods in the io.argoproj.workflow.v1alpha1. Can be overridden by an affinity specified in the template
archiveLogsbooleanArchiveLogs indicates if the container logs should be archived
argumentsArgumentsArguments contain the parameters and artifacts sent to the workflow entrypoint. Parameters are referenceable globally using the 'workflow' variable prefix, e.g. {{workflow.parameters.myparam}}
artifactGCWorkflowLevelArtifactGCArtifactGC describes the strategy to use when deleting artifacts from completed or deleted workflows (applies to all output Artifacts unless Artifact.ArtifactGC is specified, which overrides this)
artifactRepositoryRefArtifactRepositoryRefArtifactRepositoryRef specifies the configMap name and key containing the artifact repository config.
automountServiceAccountTokenbooleanAutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false.
dnsConfigPodDNSConfigPodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy.
dnsPolicystringSet DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'.
entrypointstringEntrypoint is a template reference to the starting point of the io.argoproj.workflow.v1alpha1.
executorExecutorConfigExecutor holds configurations of executor containers of the io.argoproj.workflow.v1alpha1.
hooksLifecycleHookHooks holds the lifecycle hook which is invoked at lifecycle of step, irrespective of the success, failure, or error status of the primary step
hostAliasesArray<HostAlias>No description available
hostNetworkbooleanHost networking requested for this workflow pod. Default to false.
imagePullSecretsArray<LocalObjectReference>ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
metricsMetricsMetrics are a list of metrics emitted from this Workflow
nodeSelectorMap< string , string >NodeSelector is a selector which will result in all pods of the workflow to be scheduled on the selected node(s). This is able to be overridden by a nodeSelector specified in the template.
onExitstringOnExit is a template reference which is invoked at the end of the workflow, irrespective of the success, failure, or error of the primary io.argoproj.workflow.v1alpha1.
parallelismintegerParallelism limits the max total parallel pods that can execute at the same time in a workflow
podDisruptionBudgetPodDisruptionBudgetSpecPodDisruptionBudget holds the number of concurrent disruptions that you allow for Workflow's Pods. Controller will automatically add the selector with workflow name, if selector is empty. Optional: Defaults to empty.
podGCPodGCPodGC describes the strategy to use when deleting completed pods
podMetadataMetadataPodMetadata defines additional metadata that should be applied to workflow pods
~~podPriority~~~~integer~~~~Priority to apply to workflow pods.~~ DEPRECATED: Use PodPriorityClassName instead.
podPriorityClassNamestringPriorityClassName to apply to workflow pods.
podSpecPatchstringPodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits).
priorityintegerPriority is used if controller is configured to process limited number of workflows in parallel. Workflows with higher priority are processed first.
retryStrategyRetryStrategyRetryStrategy for all templates in the io.argoproj.workflow.v1alpha1.
schedulerNamestringSet scheduler name for all pods. Will be overridden if container/script template's scheduler name is set. Default scheduler will be used if neither specified.
securityContextPodSecurityContextSecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.
serviceAccountNamestringServiceAccountName is the name of the ServiceAccount to run all pods of the workflow as.
shutdownstringShutdown will shutdown the workflow according to its ShutdownStrategy
suspendbooleanSuspend will suspend the workflow and prevent execution of any future steps in the workflow
synchronizationSynchronizationSynchronization holds synchronization lock configuration for this Workflow
templateDefaultsTemplateTemplateDefaults holds default template values that will apply to all templates in the Workflow, unless overridden on the template-level
templatesArray<Template>Templates is a list of workflow templates used in a workflow
tolerationsArray<Toleration>Tolerations to apply to workflow pods.
ttlStrategyTTLStrategyTTLStrategy limits the lifetime of a Workflow that has finished execution depending on if it Succeeded or Failed. If this struct is set, once the Workflow finishes, it will be deleted after the time to live expires. If this field is unset, the controller config map will hold the default values.
volumeClaimGCVolumeClaimGCVolumeClaimGC describes the strategy to use when deleting volumes from completed workflows
volumeClaimTemplatesArray<PersistentVolumeClaim>VolumeClaimTemplates is a list of claims that containers are allowed to reference. The Workflow controller will create the claims at the beginning of the workflow and delete the claims upon completion of the workflow
volumesArray<Volume>Volumes is a list of volumes that can be mounted by containers in a io.argoproj.workflow.v1alpha1.
workflowMetadataWorkflowMetadataWorkflowMetadata contains some metadata of the workflow to refer to
workflowTemplateRefWorkflowTemplateRefWorkflowTemplateRef holds a reference to a WorkflowTemplate for execution
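As a hedged end-to-end sketch, a small Workflow touching several of the spec fields above (entrypoint, arguments, serviceAccountName, parallelism, templates); all names and values are assumptions.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-                  # assumed name prefix
spec:
  entrypoint: main
  serviceAccountName: argo-workflow     # assumed ServiceAccount
  parallelism: 2
  arguments:
    parameters:
      - name: message
        value: hello
  templates:
    - name: main
      container:
        image: alpine:3.19
        command: [echo]
        args: ["{{workflow.parameters.message}}"]
```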
-

WorkflowStatus

-

WorkflowStatus contains overall status information about a workflow

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
artifactGCStatusArtGCStatusArtifactGCStatus maintains the status of Artifact Garbage Collection
artifactRepositoryRefArtifactRepositoryRefStatusArtifactRepositoryRef is used to cache the repository to use so we do not need to determine it everytime we reconcile.
compressedNodesstringCompressed and base64 decoded Nodes map
conditionsArray<Condition>Conditions is a list of conditions the Workflow may have
estimatedDurationintegerEstimatedDuration in seconds.
finishedAtTimeTime at which this workflow completed
messagestringA human readable message indicating details about why the workflow is in this condition.
nodesNodeStatusNodes is a mapping between a node ID and the node's status.
offloadNodeStatusVersionstringWhether or not node status has been offloaded to a database. If exists, then Nodes and CompressedNodes will be empty. This will actually be populated with a hash of the offloaded data.
outputsOutputsOutputs captures output values and artifact locations produced by the workflow via global outputs
persistentVolumeClaimsArray<Volume>PersistentVolumeClaims tracks all PVCs that were created as part of the io.argoproj.workflow.v1alpha1. The contents of this list are drained at the end of the workflow.
phasestringPhase a simple, high-level summary of where the workflow is in its lifecycle. Will be "" (Unknown), "Pending", or "Running" before the workflow is completed, and "Succeeded", "Failed" or "Error" once the workflow has completed.
progressstringProgress to completion
resourcesDurationMap< integer , int64 >ResourcesDuration is the total for the workflow
startedAtTimeTime at which this workflow started
storedTemplatesTemplateStoredTemplates is a mapping between a template ref and the node's status.
storedWorkflowTemplateSpecWorkflowSpecStoredWorkflowSpec stores the WorkflowTemplate spec for future execution.
synchronizationSynchronizationStatusSynchronization stores the status of synchronization locks
taskResultsCompletedMap< boolean , string >Have task results been completed? (mapped by Pod name) used to prevent premature garbage collection of artifacts.
-

CronWorkflowSpec

-

CronWorkflowSpec is the specification of a CronWorkflow

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
concurrencyPolicystringConcurrencyPolicy is the K8s-style concurrency policy that will be used
failedJobsHistoryLimitintegerFailedJobsHistoryLimit is the number of failed jobs to be kept at a time
schedulestringSchedule is a schedule to run the Workflow in Cron format
startingDeadlineSecondsintegerStartingDeadlineSeconds is the K8s-style deadline that will limit the time a CronWorkflow will be run after its original scheduled time if it is missed.
successfulJobsHistoryLimitintegerSuccessfulJobsHistoryLimit is the number of successful jobs to be kept at a time
suspendbooleanSuspend is a flag that will stop new CronWorkflows from running if set to true
timezonestringTimezone is the timezone against which the cron schedule will be calculated, e.g. "Asia/Tokyo". Default is machine's local time.
workflowMetadataObjectMetaWorkflowMetadata contains some metadata of the workflow to be run
workflowSpecWorkflowSpecWorkflowSpec is the spec of the workflow to be run
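A hedged sketch of a CronWorkflow exercising the spec fields above; the schedule, timezone, and names are assumptions.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: nightly-report                # assumed name
spec:
  schedule: "0 2 * * *"               # 02:00 every day
  timezone: "Asia/Tokyo"
  concurrencyPolicy: Replace          # Allow | Forbid | Replace
  startingDeadlineSeconds: 60
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  workflowSpec:
    entrypoint: main
    templates:
      - name: main
        container:
          image: alpine:3.19
          command: [echo, "report"]
```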
-

CronWorkflowStatus

-

CronWorkflowStatus is the status of a CronWorkflow

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
activeArray<ObjectReference>Active is a list of active workflows stemming from this CronWorkflow
conditionsArray<Condition>Conditions is a list of conditions the CronWorkflow may have
lastScheduledTimeTimeLastScheduleTime is the last time the CronWorkflow was scheduled
-

Arguments

-

Arguments to a template

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
artifactsArray<Artifact>Artifacts is the list of artifacts to pass to the template or workflow
parametersArray<Parameter>Parameters is the list of parameters to pass to the template or workflow
-

WorkflowLevelArtifactGC

-

WorkflowLevelArtifactGC describes how to delete artifacts from completed Workflows - this spec is used on the Workflow level

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
forceFinalizerRemovalbooleanForceFinalizerRemoval: if set to true, the finalizer will be removed in the case that Artifact GC fails
podMetadataMetadataPodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion
podSpecPatchstringPodSpecPatch holds strategic merge patch to apply against the artgc pod spec.
serviceAccountNamestringServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion
strategystringStrategy is the strategy to use.
-

ArtifactRepositoryRef

-

No description available

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
configMapstringThe name of the config map. Defaults to "artifact-repositories".
keystringThe config map key. Defaults to the value of the "workflows.argoproj.io/default-artifact-repository" annotation.
-

ExecutorConfig

-

ExecutorConfig holds configurations of an executor container.

-

Fields

- - - - - - - - - - - - - - - -
Field NameField TypeDescription
serviceAccountNamestringServiceAccountName specifies the service account name of the executor container.
-

LifecycleHook

-

No description available

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
argumentsArgumentsArguments hold arguments to the template
expressionstringExpression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored
templatestringTemplate is the name of the template to execute by the hook
templateRefTemplateRefTemplateRef is the reference to the template resource to execute by the hook
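A hedged sketch of workflow-level hooks; the status expression follows the pattern in the Argo hooks documentation, while the notify template name is an assumption.

```yaml
hooks:
  exit:                               # runs when the workflow finishes, regardless of outcome
    template: notify
  running:
    expression: workflow.status == "Running"
    template: notify
```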
-

Metrics

-

Metrics are a list of metrics emitted from a Workflow/Template

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - -
Field NameField TypeDescription
prometheusArray<Prometheus>Prometheus is a list of prometheus metrics to be emitted
-

PodGC

-

PodGC describes how to delete completed pods as they complete

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
deleteDelayDurationDurationDeleteDelayDuration specifies the duration before pods in the GC queue get deleted.
labelSelectorLabelSelectorLabelSelector is the label selector to check if the pods match the labels before being added to the pod GC queue.
strategystringStrategy is the strategy to use. One of "OnPodCompletion", "OnPodSuccess", "OnWorkflowCompletion", "OnWorkflowSuccess". If unset, does not delete Pods
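A short sketch of a podGC block combining the three fields above; the label is an assumption.

```yaml
podGC:
  strategy: OnPodSuccess              # delete only pods that succeeded
  deleteDelayDuration: 30s            # wait 30 seconds before deletion
  labelSelector:
    matchLabels:
      should-be-deleted: "true"       # assumed label; only matching pods are queued for GC
```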
-

Metadata

-

Pod metadata

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
annotationsMap< string , string >No description available
labelsMap< string , string >No description available
-

RetryStrategy

-

RetryStrategy provides controls on how to retry a workflow step

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
affinityRetryAffinityAffinity prevents running workflow's step on the same host
backoffBackoffBackoff is a backoff strategy
expressionstringExpression is a condition expression for when a node will be retried. If it evaluates to false, the node will not be retried and the retry strategy will be ignored
limitIntOrStringLimit is the maximum number of retry attempts when retrying a container. It does not include the original container; the maximum number of total attempts will be limit + 1.
retryPolicystringRetryPolicy is a policy of NodePhase statuses that will be retried
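A hedged sketch of a retryStrategy with exponential backoff; the numbers are illustrative assumptions.

```yaml
retryStrategy:
  limit: "3"                # total attempts = limit + 1
  retryPolicy: OnFailure    # also: Always, OnError, OnTransientError
  backoff:
    duration: "10s"
    factor: "2"
    maxDuration: "5m"
```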
-

Synchronization

-

Synchronization holds synchronization lock configuration

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
mutexMutexMutex holds the Mutex lock details
semaphoreSemaphoreRefSemaphore holds the Semaphore configuration
-

Template

-

Template is a reusable and composable unit of execution in a workflow

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
activeDeadlineSecondsIntOrStringOptional duration in seconds relative to the StartTime that the pod may be active on a node before the system actively tries to terminate the pod; value must be positive integer This field is only applicable to container and script templates.
affinityAffinityAffinity sets the pod's scheduling constraints Overrides the affinity set at the workflow level (if any)
archiveLocationArtifactLocationLocation in which all files related to the step will be stored (logs, artifacts, etc...). Can be overridden by individual items in Outputs. If omitted, will use the default artifact repository location configured in the controller, appended with the / in the key.
automountServiceAccountTokenbooleanAutomountServiceAccountToken indicates whether a service account token should be automatically mounted in pods. ServiceAccountName of ExecutorConfig must be specified if this value is false.
containerContainerContainer is the main container image to run in the pod
containerSetContainerSetTemplateContainerSet groups multiple containers within a single pod.
daemonbooleanDaemon will allow a workflow to proceed to the next step so long as the container reaches readiness
dagDAGTemplateDAG template subtype which runs a DAG
dataDataData is a data template
executorExecutorConfigExecutor holds configurations of the executor container.
failFastbooleanFailFast, if specified, will fail this template if any of its child pods has failed. This is useful for when this template is expanded with withItems, etc.
hostAliasesArray<HostAlias>HostAliases is an optional list of hosts and IPs that will be injected into the pod spec
httpHTTPHTTP makes a HTTP request
initContainersArray<UserContainer>InitContainers is a list of containers which run before the main container.
inputsInputsInputs describe what inputs parameters and artifacts are supplied to this template
memoizeMemoizeMemoize allows templates to use outputs generated from already executed templates
metadataMetadataMetadata sets the pod's metadata, i.e. annotations and labels
metricsMetricsMetrics are a list of metrics emitted from this template
namestringName is the name of the template
nodeSelectorMap< string , string >NodeSelector is a selector to schedule this step of the workflow to be run on the selected node(s). Overrides the selector set at the workflow level.
outputsOutputsOutputs describe the parameters and artifacts that this template produces
parallelismintegerParallelism limits the max total parallel pods that can execute at the same time within the boundaries of this template invocation. If additional steps/dag templates are invoked, the pods created by those templates will not be counted towards this total.
pluginPluginPlugin is a plugin template
podSpecPatchstringPodSpecPatch holds strategic merge patch to apply against the pod spec. Allows parameterization of container fields which are not strings (e.g. resource limits).
priorityintegerPriority to apply to workflow pods.
priorityClassNamestringPriorityClassName to apply to workflow pods.
resourceResourceTemplateResource template subtype which can run k8s resources
retryStrategyRetryStrategyRetryStrategy describes how to retry a template when it fails
schedulerNamestringIf specified, the pod will be dispatched by specified scheduler. Or it will be dispatched by workflow scope scheduler if specified. If neither specified, the pod will be dispatched by default scheduler.
scriptScriptTemplateScript runs a portion of code against an interpreter
securityContextPodSecurityContextSecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.
serviceAccountNamestringServiceAccountName to apply to workflow pods
sidecarsArray<UserContainer>Sidecars is a list of containers which run alongside the main container Sidecars are automatically killed when the main container completes
stepsArray<Array<WorkflowStep>>Steps define a series of sequential/parallel workflow steps
suspendSuspendTemplateSuspend template subtype which can suspend a workflow when reaching the step
synchronizationSynchronizationSynchronization holds synchronization lock configuration for this template
timeoutstringTimeout allows you to set the total node execution timeout duration counting from the node's start time. This duration also includes time in which the node spends in Pending state. This duration may not be applied to Step or DAG templates.
tolerationsArray<Toleration>Tolerations to apply to workflow pods.
volumesArray<Volume>Volumes is a list of volumes that can be mounted by containers in a template.
-

TTLStrategy

-

TTLStrategy is the strategy for the time to live depending on if the workflow succeeded or failed

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
secondsAfterCompletionintegerSecondsAfterCompletion is the number of seconds to live after completion
secondsAfterFailureintegerSecondsAfterFailure is the number of seconds to live after failure
secondsAfterSuccessintegerSecondsAfterSuccess is the number of seconds to live after success
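A small sketch of a ttlStrategy; the values are assumptions chosen to keep failed workflows around longer than successful ones.

```yaml
ttlStrategy:
  secondsAfterCompletion: 300     # fallback for any completed workflow
  secondsAfterSuccess: 60
  secondsAfterFailure: 86400      # keep failures for a day for debugging
```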
-

VolumeClaimGC

-

VolumeClaimGC describes how to delete volumes from completed Workflows

-

Fields

- - - - - - - - - - - - - - - -
Field NameField TypeDescription
strategystringStrategy is the strategy to use. One of "OnWorkflowCompletion", "OnWorkflowSuccess". Defaults to "OnWorkflowSuccess"
-

WorkflowMetadata

-

No description available

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
annotationsMap< string , string >No description available
labelsMap< string , string >No description available
labelsFromLabelValueFromNo description available
-

WorkflowTemplateRef

-

WorkflowTemplateRef is a reference to a WorkflowTemplate resource.

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
clusterScopebooleanClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate).
namestringName is the resource name of the workflow template.
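A hedged sketch of submitting a Workflow whose spec comes entirely from a referenced WorkflowTemplate; the resource names are assumptions.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: from-template-        # assumed name prefix
spec:
  workflowTemplateRef:
    name: my-workflow-template        # assumed WorkflowTemplate name
    # clusterScope: true              # set when referring to a ClusterWorkflowTemplate
```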
-

ArtGCStatus

-

ArtGCStatus maintains state related to ArtifactGC

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
notSpecifiedbooleanif this is true, we already checked to see if we need to do it and we don't
podsRecoupedMap< boolean , string >have completed Pods been processed? (mapped by Pod name) used to prevent re-processing the Status of a Pod more than once
strategiesProcessedMap< boolean , string >have Pods been started to perform this strategy? (enables us not to re-process what we've already done)
-

ArtifactRepositoryRefStatus

-

No description available

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
artifactRepositoryArtifactRepositoryThe repository the workflow will use. This may be empty before v3.1.
configMapstringThe name of the config map. Defaults to "artifact-repositories".
defaultbooleanIf this ref represents the default artifact repository, rather than a config map.
keystringThe config map key. Defaults to the value of the "workflows.argoproj.io/default-artifact-repository" annotation.
namespacestringThe namespace of the config map. Defaults to the workflow's namespace, or the controller's namespace (if found).
-

Condition

-

No description available

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
messagestringMessage is the condition message
statusstringStatus is the status of the condition
typestringType is the type of condition
-

NodeStatus

NodeStatus contains status information about an individual node in the workflow

Fields

| Field Name | Field Type | Description |
|---|---|---|
| boundaryID | string | BoundaryID indicates the node ID of the associated template root node to which this node belongs |
| children | Array<string> | Children is a list of child node IDs |
| daemoned | boolean | Daemoned tracks whether or not this node was daemoned and needs to be terminated |
| displayName | string | DisplayName is a human readable representation of the node. Unique within a template boundary |
| estimatedDuration | integer | EstimatedDuration in seconds. |
| finishedAt | Time | Time at which this node completed |
| hostNodeName | string | HostNodeName is the name of the Kubernetes node on which the Pod is running, if applicable |
| id | string | ID is a unique identifier of a node within the workflow. It is implemented as a hash of the node name, which makes the ID deterministic |
| inputs | Inputs | Inputs captures input parameter values and artifact locations supplied to this template invocation |
| memoizationStatus | MemoizationStatus | MemoizationStatus holds information about cached nodes |
| message | string | A human readable message indicating details about why the node is in this condition. |
| name | string | Name is the unique name in the node tree used to generate the node ID |
| nodeFlag | NodeFlag | NodeFlag tracks some history of the node, e.g. hooked, retried, etc. |
| outboundNodes | Array<string> | OutboundNodes tracks the node IDs which are considered "outbound" nodes to a template invocation. For every invocation of a template, there are nodes which we considered as "outbound". Essentially, these are last nodes in the execution sequence to run, before the template is considered completed. These nodes are then connected as parents to a following step. In the case of single pod steps (i.e. container, script, resource templates), this list will be nil since the pod itself is already considered the "outbound" node. In the case of DAGs, outbound nodes are the "target" tasks (tasks with no children). In the case of steps, outbound nodes are all the containers involved in the last step group. NOTE: since templates are composable, the list of outbound nodes are carried upwards when a DAG/steps template invokes another DAG/steps template. In other words, the outbound nodes of a template, will be a superset of the outbound nodes of its last children. |
| outputs | Outputs | Outputs captures output parameter values and artifact locations produced by this template invocation |
| phase | string | Phase is a simple, high-level summary of where the node is in its lifecycle. Can be used as a state machine. Will be one of these values "Pending", "Running" before the node is completed, or "Succeeded", "Skipped", "Failed", "Error", or "Omitted" as a final state. |
| podIP | string | PodIP captures the IP of the pod for daemoned steps |
| progress | string | Progress to completion |
| resourcesDuration | Map<integer, int64> | ResourcesDuration is indicative, but not accurate, resource duration. This is populated when the node completes. |
| startedAt | Time | Time at which this node started |
| synchronizationStatus | NodeSynchronizationStatus | SynchronizationStatus is the synchronization status of the node |
| templateName | string | TemplateName is the template name which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) |
| templateRef | TemplateRef | TemplateRef is the reference to the template resource which this node corresponds to. Not applicable to virtual nodes (e.g. Retry, StepGroup) |
| templateScope | string | TemplateScope is the template scope in which the template of this node was retrieved. |
| type | string | Type indicates type of node |

Outputs

Outputs hold parameters, artifacts, and results from a step

Fields

| Field Name | Field Type | Description |
|---|---|---|
| artifacts | Array<Artifact> | Artifacts holds the list of output artifacts produced by a step |
| exitCode | string | ExitCode holds the exit code of a script template |
| parameters | Array<Parameter> | Parameters holds the list of output parameters produced by a step |
| result | string | Result holds the result (stdout) of a script template |

SynchronizationStatus

SynchronizationStatus stores the status of semaphore and mutex.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| mutex | MutexStatus | Mutex stores this workflow's mutex holder details |
| semaphore | SemaphoreStatus | Semaphore stores this workflow's Semaphore holder details |

Artifact

Artifact indicates an artifact to place at a specified path

Fields

| Field Name | Field Type | Description |
|---|---|---|
| archive | ArchiveStrategy | Archive controls how the artifact will be saved to the artifact repository. |
| archiveLogs | boolean | ArchiveLogs indicates if the container logs should be archived |
| artifactGC | ArtifactGC | ArtifactGC describes the strategy to use when deleting an artifact from completed or deleted workflows |
| artifactory | ArtifactoryArtifact | Artifactory contains artifactory artifact location details |
| azure | AzureArtifact | Azure contains Azure Storage artifact location details |
| deleted | boolean | Has this been deleted? |
| from | string | From allows an artifact to reference an artifact from a previous step |
| fromExpression | string | FromExpression, if defined, is evaluated to specify the value for the artifact |
| gcs | GCSArtifact | GCS contains GCS artifact location details |
| git | GitArtifact | Git contains git artifact location details |
| globalName | string | GlobalName exports an output artifact to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.artifacts.XXXX}}' and in workflow.status.outputs.artifacts |
| hdfs | HDFSArtifact | HDFS contains HDFS artifact location details |
| http | HTTPArtifact | HTTP contains HTTP artifact location details |
| mode | integer | mode bits to use on this file, must be a value between 0 and 0777 set when loading input artifacts. |
| name | string | name of the artifact. must be unique within a template's inputs/outputs. |
| optional | boolean | Make Artifacts optional, if Artifacts doesn't generate or exist |
| oss | OSSArtifact | OSS contains OSS artifact location details |
| path | string | Path is the container path to the artifact |
| raw | RawArtifact | Raw contains raw artifact location details |
| recurseMode | boolean | If mode is set, apply the permission recursively into the artifact if it is a folder |
| s3 | S3Artifact | S3 contains S3 artifact location details |
| subPath | string | SubPath allows an artifact to be sourced from a subpath within the specified source |

Parameter

Parameter indicates a passed string parameter to a service template with an optional default value

Fields

| Field Name | Field Type | Description |
|---|---|---|
| default | string | Default is the default value to use for an input parameter if a value was not supplied |
| description | string | Description is the parameter description |
| enum | Array<string> | Enum holds a list of string values to choose from, for the actual value of the parameter |
| globalName | string | GlobalName exports an output parameter to the global scope, making it available as '{{io.argoproj.workflow.v1alpha1.outputs.parameters.XXXX}}' and in workflow.status.outputs.parameters |
| name | string | Name is the parameter name |
| value | string | Value is the literal value to use for the parameter. If specified in the context of an input parameter, the value takes precedence over any passed values |
| valueFrom | ValueFrom | ValueFrom is the source for the output parameter's value |

TemplateRef

TemplateRef is a reference to a template resource.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| clusterScope | boolean | ClusterScope indicates the referred template is cluster scoped (i.e. a ClusterWorkflowTemplate). |
| name | string | Name is the resource name of the template. |
| template | string | Template is the name of the referred template in the resource. |

Prometheus

Prometheus is a prometheus metric to be emitted

Fields

| Field Name | Field Type | Description |
|---|---|---|
| counter | Counter | Counter is a counter metric |
| gauge | Gauge | Gauge is a gauge metric |
| help | string | Help is a string that describes the metric |
| histogram | Histogram | Histogram is a histogram metric |
| labels | Array<MetricLabel> | Labels is a list of metric labels |
| name | string | Name is the name of the metric |
| when | string | When is a conditional statement that decides when to emit the metric |
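A hedged sketch of emitting a custom counter from a template's metrics block (metric and label names are hypothetical):

```yaml
metrics:
  prometheus:
    - name: step_result_counter        # hypothetical metric name
      help: "Count of step results"
      labels:
        - key: status
          value: "{{status}}"
      when: "{{status}} == Succeeded"  # emit only for succeeded nodes
      counter:
        value: "1"
```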

RetryAffinity

RetryAffinity prevents running steps on the same host.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| nodeAntiAffinity | RetryNodeAntiAffinity | No description available |

Backoff

Backoff is a backoff strategy to use within retryStrategy

Fields

| Field Name | Field Type | Description |
|---|---|---|
| duration | string | Duration is the amount to back off. Default unit is seconds, but could also be a duration (e.g. "2m", "1h") |
| factor | IntOrString | Factor is a factor to multiply the base duration after each failed retry |
| maxDuration | string | MaxDuration is the maximum amount of time allowed for a workflow in the backoff strategy |
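A minimal sketch of a retryStrategy using exponential backoff on a container template (image and command are illustrative):

```yaml
templates:
  - name: flaky-step
    retryStrategy:
      limit: "3"
      backoff:
        duration: "10s"   # base delay before the first retry
        factor: "2"       # 10s, 20s, 40s between attempts
        maxDuration: "2m" # stop retrying after 2 minutes overall
    container:
      image: alpine:3.19
      command: [sh, -c, "exit 1"]
```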

Mutex

Mutex holds Mutex configuration

Fields

| Field Name | Field Type | Description |
|---|---|---|
| name | string | name of the mutex |
| namespace | string | Namespace is the namespace of the mutex, default: [namespace of workflow] |
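A hedged sketch of guarding a whole Workflow with a named mutex via the synchronization block (the mutex name is hypothetical):

```yaml
spec:
  entrypoint: main
  synchronization:
    mutex:
      name: my-mutex   # only one Workflow holding this mutex runs at a time
```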

SemaphoreRef

SemaphoreRef is a reference of Semaphore

Fields

| Field Name | Field Type | Description |
|---|---|---|
| configMapKeyRef | ConfigMapKeySelector | ConfigMapKeyRef is the configmap selector for Semaphore configuration |
| namespace | string | Namespace is the namespace of the configmap, default: [namespace of workflow] |

ArtifactLocation

ArtifactLocation describes a location for a single or multiple artifacts. It is used as single artifact in the context of inputs/outputs (e.g. outputs.artifacts.artname). It is also used to describe the location of multiple artifacts such as the archive location of a single workflow step, which the executor will use as a default location to store its files.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| archiveLogs | boolean | ArchiveLogs indicates if the container logs should be archived |
| artifactory | ArtifactoryArtifact | Artifactory contains artifactory artifact location details |
| azure | AzureArtifact | Azure contains Azure Storage artifact location details |
| gcs | GCSArtifact | GCS contains GCS artifact location details |
| git | GitArtifact | Git contains git artifact location details |
| hdfs | HDFSArtifact | HDFS contains HDFS artifact location details |
| http | HTTPArtifact | HTTP contains HTTP artifact location details |
| oss | OSSArtifact | OSS contains OSS artifact location details |
| raw | RawArtifact | Raw contains raw artifact location details |
| s3 | S3Artifact | S3 contains S3 artifact location details |

ContainerSetTemplate

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| containers | Array<ContainerNode> | No description available |
| retryStrategy | ContainerSetRetryStrategy | RetryStrategy describes how to retry container nodes in the container set if they fail. The number of retries (default 0) and the sleep duration between retries (default 0s, instant retry) can be set. |
| volumeMounts | Array<VolumeMount> | No description available |

DAGTemplate

DAGTemplate is a template subtype for directed acyclic graph templates

Fields

| Field Name | Field Type | Description |
|---|---|---|
| failFast | boolean | This flag is for DAG logic. The DAG logic has a built-in "fail fast" feature to stop scheduling new steps as soon as it detects that one of the DAG nodes has failed. Then it waits until all DAG nodes are completed before failing the DAG itself. The FailFast flag default is true; if set to false, it will allow a DAG to run all branches of the DAG to completion (either success or failure), regardless of the failed outcomes of branches in the DAG. More info and an example of this feature at https://github.com/argoproj/argo-workflows/issues/1442 |
| target | string | Target are one or more names of targets to execute in a DAG |
| tasks | Array<DAGTask> | Tasks are a list of DAG tasks |
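A minimal sketch of a two-task DAG template (task and template names are illustrative):

```yaml
templates:
  - name: main
    dag:
      failFast: false        # let all branches run to completion
      tasks:
        - name: a
          template: echo
        - name: b
          depends: a         # b runs after a completes
          template: echo
```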

Data

Data is a data template

Fields

| Field Name | Field Type | Description |
|---|---|---|
| source | DataSource | Source sources external data into a data template |
| transformation | Array<TransformationStep> | Transformation applies a set of transformations |

HTTP

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| body | string | Body is the content of the HTTP Request |
| bodyFrom | HTTPBodySource | BodyFrom is the content of the HTTP Request as Bytes |
| headers | Array<HTTPHeader> | Headers are an optional list of headers to send with HTTP requests |
| insecureSkipVerify | boolean | InsecureSkipVerify is a bool which, if set to true, will skip TLS verification for the HTTP client |
| method | string | Method is the HTTP method for the HTTP Request |
| successCondition | string | SuccessCondition is an expression which, if evaluated to true, is considered successful |
| timeoutSeconds | integer | TimeoutSeconds is the request timeout for the HTTP Request. Default is 30 seconds |
| url | string | URL of the HTTP Request |
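A hedged sketch of an HTTP template that polls an endpoint and succeeds on a 200 response (the URL is hypothetical):

```yaml
templates:
  - name: http-check
    http:
      url: https://example.com/health   # hypothetical endpoint
      method: GET
      timeoutSeconds: 20
      successCondition: "response.statusCode == 200"
```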

UserContainer

UserContainer is a container specified by a user.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| args | Array<string> | Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| command | Array<string> | Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| env | Array<EnvVar> | List of environment variables to set in the container. Cannot be updated. |
| envFrom | Array<EnvFromSource> | List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| image | string | Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. |
| imagePullPolicy | string | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| lifecycle | Lifecycle | Actions that the management system should take in response to container lifecycle events. Cannot be updated. |
| livenessProbe | Probe | Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| mirrorVolumeMounts | boolean | MirrorVolumeMounts will mount the same volumes specified in the main container to the container (including artifacts), at the same mountPaths. This enables dind daemon to partially see the same filesystem as the main container in order to use features such as docker volume binding |
| name | string | Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. |
| ports | Array<ContainerPort> | List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. |
| readinessProbe | Probe | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| resources | ResourceRequirements | Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| securityContext | SecurityContext | SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| startupProbe | Probe | StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| stdin | boolean | Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. |
| stdinOnce | boolean | Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false |
| terminationMessagePath | string | Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. |
| terminationMessagePolicy | string | Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. |
| tty | boolean | Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. |
| volumeDevices | Array<VolumeDevice> | volumeDevices is the list of block devices to be used by the container. |
| volumeMounts | Array<VolumeMount> | Pod volumes to mount into the container's filesystem. Cannot be updated. |
| workingDir | string | Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |

Inputs

Inputs are the mechanism for passing parameters, artifacts, volumes from one template to another

Fields

| Field Name | Field Type | Description |
|---|---|---|
| artifacts | Array<Artifact> | Artifacts are a list of artifacts passed as inputs |
| parameters | Array<Parameter> | Parameters are a list of parameters passed as inputs |

Memoize

Memoization enables caching for the Outputs of the template

Fields

| Field Name | Field Type | Description |
|---|---|---|
| cache | Cache | Cache sets and configures the kind of cache |
| key | string | Key is the key to use as the caching key |
| maxAge | string | MaxAge is the maximum age (e.g. "180s", "24h") of an entry that is still considered valid. If an entry is older than the MaxAge, it will be ignored. |
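A hedged sketch of memoizing a template keyed on an input parameter, with entries stored in a ConfigMap (the ConfigMap name is hypothetical):

```yaml
templates:
  - name: expensive
    memoize:
      key: "{{inputs.parameters.seed}}"
      maxAge: "1h"              # cache hits older than 1 hour are ignored
      cache:
        configMap:
          name: my-cache        # hypothetical ConfigMap holding cache entries
```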

Plugin

Plugin is an Object with exactly one key

ResourceTemplate

ResourceTemplate is a template subtype to manipulate kubernetes resources

Fields

| Field Name | Field Type | Description |
|---|---|---|
| action | string | Action is the action to perform to the resource. Must be one of: get, create, apply, delete, replace, patch |
| failureCondition | string | FailureCondition is a label selector expression which describes the conditions of the k8s resource in which the step was considered failed |
| flags | Array<string> | Flags is a set of additional options passed to kubectl before submitting a resource, e.g. to disable resource validation: flags: [ "--validate=false" ] |
| manifest | string | Manifest contains the kubernetes manifest |
| manifestFrom | ManifestFrom | ManifestFrom is the source for a single kubernetes manifest |
| mergeStrategy | string | MergeStrategy is the strategy used to merge a patch. It defaults to "strategic". Must be one of: strategic, merge, json |
| setOwnerReference | boolean | SetOwnerReference sets the reference to the workflow on the OwnerReference of the generated resource. |
| successCondition | string | SuccessCondition is a label selector expression which describes the conditions of the k8s resource in which it is acceptable to proceed to the following step |

ScriptTemplate

ScriptTemplate is a template subtype to enable scripting through code steps

Fields

| Field Name | Field Type | Description |
|---|---|---|
| args | Array<string> | Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| command | Array<string> | Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| env | Array<EnvVar> | List of environment variables to set in the container. Cannot be updated. |
| envFrom | Array<EnvFromSource> | List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| image | string | Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. |
| imagePullPolicy | string | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| lifecycle | Lifecycle | Actions that the management system should take in response to container lifecycle events. Cannot be updated. |
| livenessProbe | Probe | Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| name | string | Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. |
| ports | Array<ContainerPort> | List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. |
| readinessProbe | Probe | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| resources | ResourceRequirements | Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| securityContext | SecurityContext | SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| source | string | Source contains the source code of the script to execute |
| startupProbe | Probe | StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| stdin | boolean | Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. |
| stdinOnce | boolean | Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false |
| terminationMessagePath | string | Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. |
| terminationMessagePolicy | string | Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. |
| tty | boolean | Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. |
| volumeDevices | Array<VolumeDevice> | volumeDevices is the list of block devices to be used by the container. |
| volumeMounts | Array<VolumeMount> | Pod volumes to mount into the container's filesystem. Cannot be updated. |
| workingDir | string | Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |

WorkflowStep

WorkflowStep is a reference to a template to execute in a series of steps

Fields

| Field Name | Field Type | Description |
|---|---|---|
| arguments | Arguments | Arguments hold arguments to the template |
| continueOn | ContinueOn | ContinueOn makes Argo proceed with the following step even if this step fails. Errors and Failed states can be specified |
| hooks | LifecycleHook | Hooks holds the lifecycle hook which is invoked at the lifecycle of the step, irrespective of the success, failure, or error status of the primary step |
| inline | Template | Inline is the template. Template must be empty if this is declared (and vice-versa). |
| name | string | Name of the step |
| ~~onExit~~ | ~~string~~ | ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. |
| template | string | Template is the name of the template to execute as the step |
| templateRef | TemplateRef | TemplateRef is the reference to the template resource to execute as the step. |
| when | string | When is an expression in which the step should conditionally execute |
| withItems | Array<Item> | WithItems expands a step into multiple parallel steps from the items in the list |
| withParam | string | WithParam expands a step into multiple parallel steps from the value in the parameter, which is expected to be a JSON list. |
| withSequence | Sequence | WithSequence expands a step into a numeric sequence |

SuspendTemplate

SuspendTemplate is a template subtype to suspend a workflow at a predetermined point in time

Fields

| Field Name | Field Type | Description |
|---|---|---|
| duration | string | Duration is the seconds to wait before automatically resuming a template. Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h" |
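A minimal sketch of a suspend template that pauses the workflow at that step:

```yaml
templates:
  - name: wait
    suspend:
      duration: "30s"   # resume automatically after 30 seconds; omit to wait for manual resume
```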

LabelValueFrom

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| expression | string | No description available |

ArtifactRepository

ArtifactRepository represents an artifact repository in which a controller will store its artifacts

Fields

| Field Name | Field Type | Description |
|---|---|---|
| archiveLogs | boolean | ArchiveLogs enables log archiving |
| artifactory | ArtifactoryArtifactRepository | Artifactory stores artifacts to JFrog Artifactory |
| azure | AzureArtifactRepository | Azure stores artifacts in an Azure Storage account |
| gcs | GCSArtifactRepository | GCS stores artifacts in a GCS object store |
| hdfs | HDFSArtifactRepository | HDFS stores artifacts in HDFS |
| oss | OSSArtifactRepository | OSS stores artifacts in an OSS-compliant object store |
| s3 | S3ArtifactRepository | S3 stores artifacts in an S3-compliant object store |

MemoizationStatus

MemoizationStatus is the status of this memoized node

Fields

| Field Name | Field Type | Description |
|---|---|---|
| cacheName | string | Cache is the name of the cache that was used |
| hit | boolean | Hit indicates whether this node was created from a cache entry |
| key | string | Key is the name of the key used for this node's cache |

NodeFlag

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| hooked | boolean | Hooked tracks whether or not this node was triggered by hook or onExit |
| retried | boolean | Retried tracks whether or not this node was retried by retryStrategy |

NodeSynchronizationStatus

NodeSynchronizationStatus stores the status of a node

Fields

| Field Name | Field Type | Description |
|---|---|---|
| waiting | string | Waiting is the name of the lock that this node is waiting for |

MutexStatus

MutexStatus contains which objects hold mutex locks, and which objects this workflow is waiting on to release locks.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| holding | Array<MutexHolding> | Holding is a list of mutexes and their respective objects that are held by mutex lock for this workflow. |
| waiting | Array<MutexHolding> | Waiting is a list of mutexes and their respective objects this workflow is waiting for. |

SemaphoreStatus

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| holding | Array<SemaphoreHolding> | Holding stores the list of resources that have acquired the synchronization lock for workflows. |
| waiting | Array<SemaphoreHolding> | Waiting indicates the list of current synchronization lock holders. |

ArchiveStrategy

ArchiveStrategy describes how to archive files/directory when saving artifacts

Fields

| Field Name | Field Type | Description |
|---|---|---|
| none | NoneStrategy | No description available |
| tar | TarStrategy | No description available |
| zip | ZipStrategy | No description available |

ArtifactGC

ArtifactGC describes how to delete artifacts from completed Workflows - this is embedded into the WorkflowLevelArtifactGC, and also used for individual Artifacts to override that as needed

Fields

| Field Name | Field Type | Description |
|---|---|---|
| podMetadata | Metadata | PodMetadata is an optional field for specifying the Labels and Annotations that should be assigned to the Pod doing the deletion |
| serviceAccountName | string | ServiceAccountName is an optional field for specifying the Service Account that should be assigned to the Pod doing the deletion |
| strategy | string | Strategy is the strategy to use. |

ArtifactoryArtifact

ArtifactoryArtifact is the location of an artifactory artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| passwordSecret | SecretKeySelector | PasswordSecret is the secret selector to the repository password |
| url | string | URL of the artifact |
| usernameSecret | SecretKeySelector | UsernameSecret is the secret selector to the repository username |

AzureArtifact

AzureArtifact is the location of an Azure Storage artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| accountKeySecret | SecretKeySelector | AccountKeySecret is the secret selector to the Azure Blob Storage account access key |
| blob | string | Blob is the blob name (i.e., path) in the container where the artifact resides |
| container | string | Container is the container where resources will be stored |
| endpoint | string | Endpoint is the service url associated with an account. It is most likely "https://.blob.core.windows.net" |
| useSDKCreds | boolean | UseSDKCreds tells the driver to figure out credentials based on sdk defaults. |

GCSArtifact

GCSArtifact is the location of a GCS artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| bucket | string | Bucket is the name of the bucket |
| key | string | Key is the path in the bucket where the artifact resides |
| serviceAccountKeySecret | SecretKeySelector | ServiceAccountKeySecret is the secret selector to the bucket's service account key |

GitArtifact

GitArtifact is the location of a git artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| branch | string | Branch is the branch to fetch when SingleBranch is enabled |
| depth | integer | Depth specifies clones/fetches should be shallow and include the given number of commits from the branch tip |
| disableSubmodules | boolean | DisableSubmodules disables submodules during git clone |
| fetch | Array<string> | Fetch specifies a number of refs that should be fetched before checkout |
| insecureIgnoreHostKey | boolean | InsecureIgnoreHostKey disables SSH strict host key checking during git clone |
| passwordSecret | SecretKeySelector | PasswordSecret is the secret selector to the repository password |
| repo | string | Repo is the git repository |
| revision | string | Revision is the git commit, tag, branch to checkout |
| singleBranch | boolean | SingleBranch enables single branch clone, using the branch parameter |
| sshPrivateKeySecret | SecretKeySelector | SSHPrivateKeySecret is the secret selector to the repository ssh private key |
| usernameSecret | SecretKeySelector | UsernameSecret is the secret selector to the repository username |

HDFSArtifact

HDFSArtifact is the location of an HDFS artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| addresses | Array<string> | Addresses is accessible addresses of HDFS name nodes |
| force | boolean | Force copies a file forcibly even if it exists |
| hdfsUser | string | HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used. |
| krbCCacheSecret | SecretKeySelector | KrbCCacheSecret is the secret selector for Kerberos ccache. Either ccache or keytab can be set to use Kerberos. |
| krbConfigConfigMap | ConfigMapKeySelector | KrbConfig is the configmap selector for Kerberos config as string. It must be set if either ccache or keytab is used. |
| krbKeytabSecret | SecretKeySelector | KrbKeytabSecret is the secret selector for Kerberos keytab. Either ccache or keytab can be set to use Kerberos. |
| krbRealm | string | KrbRealm is the Kerberos realm used with Kerberos keytab. It must be set if keytab is used. |
| krbServicePrincipalName | string | KrbServicePrincipalName is the principal name of Kerberos service. It must be set if either ccache or keytab is used. |
| krbUsername | string | KrbUsername is the Kerberos username used with Kerberos keytab. It must be set if keytab is used. |
| path | string | Path is a file path in HDFS |

HTTPArtifact

HTTPArtifact allows a file served on HTTP to be placed as an input artifact in a container

Fields

| Field Name | Field Type | Description |
|---|---|---|
| auth | HTTPAuth | Auth contains information for client authentication |
| headers | Array<Header> | Headers are an optional list of headers to send with HTTP requests for artifacts |
| url | string | URL of the artifact |

OSSArtifact

OSSArtifact is the location of an Alibaba Cloud OSS artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| accessKeySecret | SecretKeySelector | AccessKeySecret is the secret selector to the bucket's access key |
| bucket | string | Bucket is the name of the bucket |
| createBucketIfNotPresent | boolean | CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist |
| endpoint | string | Endpoint is the hostname of the bucket endpoint |
| key | string | Key is the path in the bucket where the artifact resides |
| lifecycleRule | OSSLifecycleRule | LifecycleRule specifies how to manage the bucket's lifecycle |
| secretKeySecret | SecretKeySelector | SecretKeySecret is the secret selector to the bucket's secret key |
| securityToken | string | SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm |
| useSDKCreds | boolean | UseSDKCreds tells the driver to figure out credentials based on sdk defaults. |

RawArtifact

RawArtifact allows raw string content to be placed as an artifact in a container

Fields

| Field Name | Field Type | Description |
|---|---|---|
| data | string | Data is the string contents of the artifact |

S3Artifact

S3Artifact is the location of an S3 artifact

Fields

| Field Name | Field Type | Description |
|---|---|---|
| accessKeySecret | SecretKeySelector | AccessKeySecret is the secret selector to the bucket's access key |
| bucket | string | Bucket is the name of the bucket |
| caSecret | SecretKeySelector | CASecret specifies the secret that contains the CA, used to verify the TLS connection |
| createBucketIfNotPresent | CreateS3BucketOptions | CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is. |
| encryptionOptions | S3EncryptionOptions | No description available |
| endpoint | string | Endpoint is the hostname of the bucket endpoint |
| insecure | boolean | Insecure will connect to the service with TLS |
| key | string | Key is the key in the bucket where the artifact resides |
| region | string | Region contains the optional bucket region |
| roleARN | string | RoleARN is the Amazon Resource Name (ARN) of the role to assume. |
| secretKeySecret | SecretKeySelector | SecretKeySecret is the secret selector to the bucket's secret key |
| useSDKCreds | boolean | UseSDKCreds tells the driver to figure out credentials based on sdk defaults. |
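A hedged sketch of an output artifact stored in S3 using explicit credentials (bucket, key, and Secret names are hypothetical):

```yaml
outputs:
  artifacts:
    - name: result
      path: /tmp/result.txt
      s3:
        endpoint: s3.amazonaws.com
        bucket: my-bucket               # hypothetical bucket
        key: results/result.txt
        accessKeySecret:
          name: my-s3-credentials       # hypothetical Secret
          key: accessKey
        secretKeySecret:
          name: my-s3-credentials
          key: secretKey
```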

ValueFrom

ValueFrom describes a location in which to obtain the value to a parameter

Fields

| Field Name | Field Type | Description |
|---|---|---|
| configMapKeyRef | ConfigMapKeySelector | ConfigMapKeyRef is a configmap selector for input parameter configuration |
| default | string | Default specifies a value to be used if retrieving the value from the specified source fails |
| event | string | Selector (https://github.com/antonmedv/expr) that is evaluated against the event to get the value of the parameter. E.g. payload.message |
| expression | string | Expression, if defined, is evaluated to specify the value for the parameter |
| jqFilter | string | JQFilter expression against the resource object in resource templates |
| jsonPath | string | JSONPath of a resource to retrieve an output parameter value from in resource templates |
| parameter | string | Parameter reference to a step or dag task in which to retrieve an output parameter value from (e.g. '{{steps.mystep.outputs.myparam}}') |
| path | string | Path in the container to retrieve an output parameter value from in container templates |
| supplied | SuppliedValueFrom | Supplied value to be filled in directly, either through the CLI, API, etc. |
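A minimal sketch of an output parameter read from a file in the container, with a fallback default:

```yaml
outputs:
  parameters:
    - name: result
      valueFrom:
        path: /tmp/output.txt   # read the parameter value from this file
        default: "none"         # used if the file cannot be read
```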

Counter

Counter is a Counter prometheus metric

Fields

| Field Name | Field Type | Description |
|---|---|---|
| value | string | Value is the value of the metric |

Gauge

Gauge is a Gauge prometheus metric

Fields

| Field Name | Field Type | Description |
|---|---|---|
| operation | string | Operation defines the operation to apply with value and the metric's current value |
| realtime | boolean | Realtime emits this metric in real time if applicable |
| value | string | Value is the value to be used in the operation with the metric's current value. If no operation is set, value is the value of the metric |

Histogram

Histogram is a Histogram prometheus metric

Fields

| Field Name | Field Type | Description |
|---|---|---|
| buckets | Array<Amount> | Buckets is a list of bucket divisors for the histogram |
| value | string | Value is the value of the metric |

MetricLabel

MetricLabel is a single label for a prometheus metric

Fields

| Field Name | Field Type | Description |
|---|---|---|
| key | string | No description available |
| value | string | No description available |

RetryNodeAntiAffinity

RetryNodeAntiAffinity is a placeholder for future expansion, only empty nodeAntiAffinity is allowed. In order to prevent running steps on the same host, it uses "kubernetes.io/hostname".

ContainerNode

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| args | Array<string> | Arguments to the entrypoint. The container image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| command | Array<string> | Entrypoint array. Not executed within a shell. The container image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell |
| dependencies | Array<string> | No description available |
| env | Array<EnvVar> | List of environment variables to set in the container. Cannot be updated. |
| envFrom | Array<EnvFromSource> | List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. |
| image | string | Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets. |
| imagePullPolicy | string | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| lifecycle | Lifecycle | Actions that the management system should take in response to container lifecycle events. Cannot be updated. |
| livenessProbe | Probe | Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| name | string | Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. |
| ports | Array<ContainerPort> | List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated. |
| readinessProbe | Probe | Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| resources | ResourceRequirements | Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| securityContext | SecurityContext | SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ |
| startupProbe | Probe | StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| stdin | boolean | Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. |
| stdinOnce | boolean | Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container process that reads from stdin will never receive an EOF. Default is false |
| terminationMessagePath | string | Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated. |
| terminationMessagePolicy | string | Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. |
| tty | boolean | Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. |
| volumeDevices | Array<VolumeDevice> | volumeDevices is the list of block devices to be used by the container. |
| volumeMounts | Array<VolumeMount> | Pod volumes to mount into the container's filesystem. Cannot be updated. |
| workingDir | string | Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. |

ContainerSetRetryStrategy

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| duration | string | Duration is the time between each retry, example values are "300ms", "1s" or "5m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". |
| retries | IntOrString | Number of retries |

DAGTask

DAGTask represents a node in the graph during DAG execution

Fields

| Field Name | Field Type | Description |
|---|---|---|
| arguments | Arguments | Arguments are the parameter and artifact arguments to the template |
| continueOn | ContinueOn | ContinueOn makes Argo proceed with the following step even if this step fails. Errors and Failed states can be specified |
| dependencies | Array<string> | Dependencies are names of other targets which this depends on |
| depends | string | Depends are names of other targets which this depends on |
| hooks | LifecycleHook | Hooks hold the lifecycle hook which is invoked at the lifecycle of the task, irrespective of the success, failure, or error status of the primary task |
| inline | Template | Inline is the template. Template must be empty if this is declared (and vice-versa). |
| name | string | Name is the name of the target |
| ~~onExit~~ | ~~string~~ | ~~OnExit is a template reference which is invoked at the end of the template, irrespective of the success, failure, or error of the primary template.~~ DEPRECATED: Use Hooks[exit].Template instead. |
| template | string | Name of template to execute |
| templateRef | TemplateRef | TemplateRef is the reference to the template resource to execute. |
| when | string | When is an expression in which the task should conditionally execute |
| withItems | Array<Item> | WithItems expands a task into multiple parallel tasks from the items in the list |
| withParam | string | WithParam expands a task into multiple parallel tasks from the value in the parameter, which is expected to be a JSON list. |
| withSequence | Sequence | WithSequence expands a task into a numeric sequence |

DataSource

DataSource sources external data into a data template

Fields

| Field Name | Field Type | Description |
|---|---|---|
| artifactPaths | ArtifactPaths | ArtifactPaths is a data transformation that collects a list of artifact paths |

TransformationStep

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| expression | string | Expression defines an expr expression to apply |

HTTPBodySource

-

HTTPBodySource contains the source of the HTTP body.

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`bytes`|`byte`|No description available|
-

HTTPHeader

-

No description available

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`name`|`string`|No description available|
|`value`|`string`|No description available|
|`valueFrom`|`HTTPHeaderSource`|No description available|
-
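As an illustration (URL and Secret names are placeholders), an HTTP template fragment can set a literal header and a header sourced from a Secret via `valueFrom`:

```yaml
    - name: call-api
      http:
        url: https://example.com/api/v1/items   # placeholder URL
        method: GET
        headers:
          - name: X-Request-Source
            value: argo-workflows
          - name: Authorization
            valueFrom:
              secretKeyRef:
                name: api-token    # hypothetical Secret
                key: token
```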

Cache

-

Cache is the configuration for the type of cache to be used

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`configMap`|`ConfigMapKeySelector`|ConfigMap sets a ConfigMap-based cache|
-
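A minimal memoization sketch, assuming a ConfigMap named `my-memoize-cache` (hypothetical) is used to hold cached results:

```yaml
    - name: expensive-step
      inputs:
        parameters:
          - name: input
      memoize:
        key: "{{inputs.parameters.input}}"   # cache hit when the same input is seen again
        maxAge: "1h"
        cache:
          configMap:
            name: my-memoize-cache           # hypothetical ConfigMap
      container:
        image: busybox:1.36                  # illustrative image
        command: [sh, -c, "sleep 10 && echo done"]
```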

ManifestFrom

-

No description available

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`artifact`|`Artifact`|Artifact contains the artifact to use|
-

ContinueOn

-

ContinueOn defines whether a workflow should continue even if a task or step fails or errors. It specifies whether the workflow should continue when the pod errors, fails, or both.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`error`|`boolean`|No description available|
|`failed`|`boolean`|No description available|
-
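For example (template names are illustrative), a steps fragment where one step is allowed to fail or error without failing the surrounding workflow:

```yaml
    - name: main
      steps:
        - - name: best-effort
            template: may-fail           # hypothetical template
            continueOn:
              failed: true   # keep going if the pod fails
              error: true    # keep going if the pod errors
```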

Item

-

Item expands a single workflow step into multiple parallel steps. The value of Item can be a map, string, bool, or number.

-

Sequence

-

Sequence expands a workflow step into a numeric range.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`count`|`IntOrString`|Count is number of elements in the sequence (default: 0). Not to be used with end|
|`end`|`IntOrString`|Number at which to end the sequence (default: 0). Not to be used with Count|
|`format`|`string`|Format is a printf format string to format the value in the sequence|
|`start`|`IntOrString`|Number at which to start the sequence (default: 0)|
-
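A short sketch of `withSequence` on a DAG task fragment (template and parameter names are illustrative), producing the items item-01 through item-05:

```yaml
          - name: fan-out
            template: echo               # hypothetical template taking a "message" parameter
            withSequence:
              start: "1"
              end: "5"
              format: "item-%02d"        # printf-style formatting of each value
            arguments:
              parameters:
                - name: message
                  value: "{{item}}"
```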

ArtifactoryArtifactRepository

-

ArtifactoryArtifactRepository defines the controller configuration for an artifactory artifact repository

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`keyFormat`|`string`|KeyFormat defines the format of how to store keys and can reference workflow variables.|
|`passwordSecret`|`SecretKeySelector`|PasswordSecret is the secret selector to the repository password|
|`repoURL`|`string`|RepoURL is the url for artifactory repo.|
|`usernameSecret`|`SecretKeySelector`|UsernameSecret is the secret selector to the repository username|
-

AzureArtifactRepository

-

AzureArtifactRepository defines the controller configuration for an Azure Blob Storage artifact repository

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`accountKeySecret`|`SecretKeySelector`|AccountKeySecret is the secret selector to the Azure Blob Storage account access key|
|`blobNameFormat`|`string`|BlobNameFormat defines the format of how to store blob names. Can reference workflow variables|
|`container`|`string`|Container is the container where resources will be stored|
|`endpoint`|`string`|Endpoint is the service url associated with an account. It is most likely "https://&lt;account-name&gt;.blob.core.windows.net"|
|`useSDKCreds`|`boolean`|UseSDKCreds tells the driver to figure out credentials based on sdk defaults.|
-

GCSArtifactRepository

-

GCSArtifactRepository defines the controller configuration for a GCS artifact repository

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`bucket`|`string`|Bucket is the name of the bucket|
|`keyFormat`|`string`|KeyFormat defines the format of how to store keys and can reference workflow variables.|
|`serviceAccountKeySecret`|`SecretKeySelector`|ServiceAccountKeySecret is the secret selector to the bucket's service account key|
-

HDFSArtifactRepository

-

HDFSArtifactRepository defines the controller configuration for an HDFS artifact repository

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`addresses`|`Array< string >`|Addresses is accessible addresses of HDFS name nodes|
|`force`|`boolean`|Force copies a file forcibly even if it exists|
|`hdfsUser`|`string`|HDFSUser is the user to access HDFS file system. It is ignored if either ccache or keytab is used.|
|`krbCCacheSecret`|`SecretKeySelector`|KrbCCacheSecret is the secret selector for Kerberos ccache. Either ccache or keytab can be set to use Kerberos.|
|`krbConfigConfigMap`|`ConfigMapKeySelector`|KrbConfig is the configmap selector for Kerberos config as string. It must be set if either ccache or keytab is used.|
|`krbKeytabSecret`|`SecretKeySelector`|KrbKeytabSecret is the secret selector for Kerberos keytab. Either ccache or keytab can be set to use Kerberos.|
|`krbRealm`|`string`|KrbRealm is the Kerberos realm used with Kerberos keytab. It must be set if keytab is used.|
|`krbServicePrincipalName`|`string`|KrbServicePrincipalName is the principal name of Kerberos service. It must be set if either ccache or keytab is used.|
|`krbUsername`|`string`|KrbUsername is the Kerberos username used with Kerberos keytab. It must be set if keytab is used.|
|`pathFormat`|`string`|PathFormat defines the format of path to store a file. Can reference workflow variables|
-

OSSArtifactRepository

-

OSSArtifactRepository defines the controller configuration for an OSS artifact repository

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`accessKeySecret`|`SecretKeySelector`|AccessKeySecret is the secret selector to the bucket's access key|
|`bucket`|`string`|Bucket is the name of the bucket|
|`createBucketIfNotPresent`|`boolean`|CreateBucketIfNotPresent tells the driver to attempt to create the OSS bucket for output artifacts, if it doesn't exist|
|`endpoint`|`string`|Endpoint is the hostname of the bucket endpoint|
|`keyFormat`|`string`|KeyFormat defines the format of how to store keys and can reference workflow variables.|
|`lifecycleRule`|`OSSLifecycleRule`|LifecycleRule specifies how to manage bucket's lifecycle|
|`secretKeySecret`|`SecretKeySelector`|SecretKeySecret is the secret selector to the bucket's secret key|
|`securityToken`|`string`|SecurityToken is the user's temporary security token. For more details, check out: https://www.alibabacloud.com/help/doc-detail/100624.htm|
|`useSDKCreds`|`boolean`|UseSDKCreds tells the driver to figure out credentials based on sdk defaults.|
-

S3ArtifactRepository

-

S3ArtifactRepository defines the controller configuration for an S3 artifact repository

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`accessKeySecret`|`SecretKeySelector`|AccessKeySecret is the secret selector to the bucket's access key|
|`bucket`|`string`|Bucket is the name of the bucket|
|`caSecret`|`SecretKeySelector`|CASecret specifies the secret that contains the CA, used to verify the TLS connection|
|`createBucketIfNotPresent`|`CreateS3BucketOptions`|CreateBucketIfNotPresent tells the driver to attempt to create the S3 bucket for output artifacts, if it doesn't exist. Setting Enabled Encryption will apply either SSE-S3 to the bucket if KmsKeyId is not set or SSE-KMS if it is.|
|`encryptionOptions`|`S3EncryptionOptions`|No description available|
|`endpoint`|`string`|Endpoint is the hostname of the bucket endpoint|
|`insecure`|`boolean`|Insecure will connect to the service without TLS|
|`keyFormat`|`string`|KeyFormat defines the format of how to store keys and can reference workflow variables.|
|~~`keyPrefix`~~|~~`string`~~|~~KeyPrefix is prefix used as part of the bucket key in which the controller will store artifacts.~~ DEPRECATED. Use KeyFormat instead|
|`region`|`string`|Region contains the optional bucket region|
|`roleARN`|`string`|RoleARN is the Amazon Resource Name (ARN) of the role to assume.|
|`secretKeySecret`|`SecretKeySelector`|SecretKeySecret is the secret selector to the bucket's secret key|
|`useSDKCreds`|`boolean`|UseSDKCreds tells the driver to figure out credentials based on sdk defaults.|
-
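As a hedged sketch of controller configuration (the bucket, Secret, and ConfigMap names are placeholders for your own), an S3 repository entry in an artifact-repositories ConfigMap might look like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: artifact-repositories            # conventional name; adjust to your install
  annotations:
    workflows.argoproj.io/default-artifact-repository: my-s3
data:
  my-s3: |
    s3:
      bucket: my-artifact-bucket          # placeholder bucket
      endpoint: s3.amazonaws.com
      region: us-east-1
      keyFormat: "artifacts/{{workflow.name}}/{{pod.name}}"
      accessKeySecret:
        name: my-s3-credentials           # hypothetical Secret
        key: accessKey
      secretKeySecret:
        name: my-s3-credentials
        key: secretKey
```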

MutexHolding

-

MutexHolding describes the mutex and the object which is holding it.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`holder`|`string`|Holder is a reference to the object which holds the Mutex. Holding Scenario: 1. Current workflow's NodeID which is holding the lock. e.g: ${NodeID} Waiting Scenario: 1. Current workflow or other workflow NodeID which is holding the lock. e.g: ${WorkflowName}/${NodeID}|
|`mutex`|`string`|Reference for the mutex e.g: ${namespace}/mutex/${mutexName}|
-

SemaphoreHolding

-

No description available

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`holders`|`Array< string >`|Holders stores the list of current holder names in the io.argoproj.workflow.v1alpha1.|
|`semaphore`|`string`|Semaphore stores the semaphore name.|
-

NoneStrategy

-

NoneStrategy indicates to skip tar process and upload the files or directory tree as independent files. Note that if the artifact is a directory, the artifact driver must support the ability to save/load the directory appropriately.

-

TarStrategy

-

TarStrategy will tar and gzip the file or directory when saving

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`compressionLevel`|`integer`|CompressionLevel specifies the gzip compression level to use for the artifact. Defaults to gzip.DefaultCompression.|
-
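For illustration, an output artifact fragment can choose its archive strategy; this sketch (artifact name and path are placeholders) tars and gzips a directory at maximum compression, while `none: {}` would skip archiving entirely:

```yaml
      outputs:
        artifacts:
          - name: build-logs
            path: /tmp/logs              # placeholder path
            archive:
              tar:
                compressionLevel: 9      # 1 (fastest) to 9 (best compression)
```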

ZipStrategy

-

ZipStrategy will unzip zipped input artifacts

-

HTTPAuth

-

No description available

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`basicAuth`|`BasicAuth`|No description available|
|`clientCert`|`ClientCertAuth`|No description available|
|`oauth2`|`OAuth2Auth`|No description available|
-

Header

-

Header indicates a key-value request header to be used when fetching artifacts over HTTP.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`name`|`string`|Name is the header name|
|`value`|`string`|Value is the literal value to use for the header|
-
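A hedged sketch of an HTTP input artifact fragment (URL and Secret names are placeholders) combining request headers with basic authentication:

```yaml
      inputs:
        artifacts:
          - name: dataset
            path: /tmp/dataset.json
            http:
              url: https://example.com/dataset.json   # placeholder URL
              headers:
                - name: X-Client
                  value: argo-workflows
              auth:
                basicAuth:
                  usernameSecret:
                    name: http-creds     # hypothetical Secret
                    key: username
                  passwordSecret:
                    name: http-creds
                    key: password
```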

OSSLifecycleRule

-

OSSLifecycleRule specifies how to manage bucket's lifecycle

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`markDeletionAfterDays`|`integer`|MarkDeletionAfterDays is the number of days before we delete objects in the bucket|
|`markInfrequentAccessAfterDays`|`integer`|MarkInfrequentAccessAfterDays is the number of days before we convert the objects in the bucket to Infrequent Access (IA) storage type|
-

CreateS3BucketOptions

-

CreateS3BucketOptions are options used to determine the automatic bucket-creation process.

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`objectLocking`|`boolean`|ObjectLocking enables object locking|
-

S3EncryptionOptions

-

S3EncryptionOptions used to determine encryption options during s3 operations

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`enableEncryption`|`boolean`|EnableEncryption tells the driver to encrypt objects if set to true. If kmsKeyId and serverSideCustomerKeySecret are not set, SSE-S3 will be used|
|`kmsEncryptionContext`|`string`|KmsEncryptionContext is a json blob that contains an encryption context. See https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context for more information|
|`kmsKeyId`|`string`|KMSKeyId tells the driver to encrypt the object using the specified KMS Key.|
|`serverSideCustomerKeySecret`|`SecretKeySelector`|ServerSideCustomerKeySecret tells the driver to encrypt the output artifacts using SSE-C with the specified secret.|
-

SuppliedValueFrom

-

SuppliedValueFrom is a placeholder for a value to be filled in directly, either through the CLI, API, etc.

-

Amount

-

Amount represents a numeric amount.

-

ArtifactPaths

-

ArtifactPaths expands a step from a collection of artifacts

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`archive`|`ArchiveStrategy`|Archive controls how the artifact will be saved to the artifact repository.|
|`archiveLogs`|`boolean`|ArchiveLogs indicates if the container logs should be archived|
|`artifactGC`|`ArtifactGC`|ArtifactGC describes the strategy to use when deleting an artifact from completed or deleted workflows|
|`artifactory`|`ArtifactoryArtifact`|Artifactory contains artifactory artifact location details|
|`azure`|`AzureArtifact`|Azure contains Azure Storage artifact location details|
|`deleted`|`boolean`|Has this been deleted?|
|`from`|`string`|From allows an artifact to reference an artifact from a previous step|
|`fromExpression`|`string`|FromExpression, if defined, is evaluated to specify the value for the artifact|
|`gcs`|`GCSArtifact`|GCS contains GCS artifact location details|
|`git`|`GitArtifact`|Git contains git artifact location details|
|`globalName`|`string`|GlobalName exports an output artifact to the global scope, making it available as '{{workflow.outputs.artifacts.XXXX}}' and in workflow.status.outputs.artifacts|
|`hdfs`|`HDFSArtifact`|HDFS contains HDFS artifact location details|
|`http`|`HTTPArtifact`|HTTP contains HTTP artifact location details|
|`mode`|`integer`|mode bits to use on this file, must be a value between 0 and 0777, set when loading input artifacts.|
|`name`|`string`|name of the artifact. must be unique within a template's inputs/outputs.|
|`optional`|`boolean`|Make Artifacts optional, if Artifacts doesn't generate or exist|
|`oss`|`OSSArtifact`|OSS contains OSS artifact location details|
|`path`|`string`|Path is the container path to the artifact|
|`raw`|`RawArtifact`|Raw contains raw artifact location details|
|`recurseMode`|`boolean`|If mode is set, apply the permission recursively into the artifact if it is a folder|
|`s3`|`S3Artifact`|S3 contains S3 artifact location details|
|`subPath`|`string`|SubPath allows an artifact to be sourced from a subpath within the specified source|
-

HTTPHeaderSource

-

No description available

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`secretKeyRef`|`SecretKeySelector`|No description available|
-

BasicAuth

-

BasicAuth describes the secret selectors required for basic authentication

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`passwordSecret`|`SecretKeySelector`|PasswordSecret is the secret selector to the repository password|
|`usernameSecret`|`SecretKeySelector`|UsernameSecret is the secret selector to the repository username|
-

ClientCertAuth

-

ClientCertAuth holds necessary information for client authentication via certificates

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`clientCertSecret`|`SecretKeySelector`|No description available|
|`clientKeySecret`|`SecretKeySelector`|No description available|
-

OAuth2Auth

-

OAuth2Auth holds all information for client authentication via OAuth2 tokens

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`clientIDSecret`|`SecretKeySelector`|No description available|
|`clientSecretSecret`|`SecretKeySelector`|No description available|
|`endpointParams`|`Array<OAuth2EndpointParam>`|No description available|
|`scopes`|`Array< string >`|No description available|
|`tokenURLSecret`|`SecretKeySelector`|No description available|
-

OAuth2EndpointParam

-

EndpointParam is for requesting optional fields that should be sent in the oauth request

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`key`|`string`|Name is the header name|
|`value`|`string`|Value is the literal value to use for the header|
-

External Fields

-

ObjectMeta

-

ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
annotationsMap< string , string >Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations
clusterNamestringThe name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.
creationTimestampTimeCreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC. Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
deletionGracePeriodSecondsintegerNumber of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.
deletionTimestampTimeDeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested. Populated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
finalizersArray< string >Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.
generateNamestringGenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server. If this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header). Applied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
generationintegerA sequence number representing a specific generation of the desired state. Populated by the system. Read-only.
labelsMap< string , string >Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels
managedFieldsArray<ManagedFieldsEntry>ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like "ci-cd". The set of fields is always in the version that the workflow used when modifying the object.
namestringName must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names
namespacestringNamespace defines the space within which each name must be unique. An empty namespace is equivalent to the "default" namespace, but "default" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty. Must be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces
ownerReferencesArray<OwnerReference>List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.
resourceVersionstringAn opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources. Populated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
~~selfLink~~~~string~~~~SelfLink is a URL representing this object. Populated by the system. Read-only.~~ DEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release.
uidstringUID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations. Populated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids
-

Affinity

-

Affinity is a group of affinity scheduling rules.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`nodeAffinity`|`NodeAffinity`|Describes node affinity scheduling rules for the pod.|
|`podAffinity`|`PodAffinity`|Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).|
|`podAntiAffinity`|`PodAntiAffinity`|Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).|
-

PodDNSConfig

-

PodDNSConfig defines the DNS parameters of a pod in addition to those generated from DNSPolicy.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`nameservers`|`Array< string >`|A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed.|
|`options`|`Array<PodDNSConfigOption>`|A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy.|
|`searches`|`Array< string >`|A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed.|
-

HostAlias

-

HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`hostnames`|`Array< string >`|Hostnames for the above IP address.|
|`ip`|`string`|IP address of the host file entry.|
-

LocalObjectReference

-

LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace.

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`name`|`string`|Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names|
-

PodDisruptionBudgetSpec

-

PodDisruptionBudgetSpec is a description of a PodDisruptionBudget.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`maxUnavailable`|`IntOrString`|An eviction is allowed if at most "maxUnavailable" pods selected by "selector" are unavailable after the eviction, i.e. even in absence of the evicted pod. For example, one can prevent all voluntary evictions by specifying 0. This is a mutually exclusive setting with "minAvailable".|
|`minAvailable`|`IntOrString`|An eviction is allowed if at least "minAvailable" pods selected by "selector" will still be available after the eviction, i.e. even in the absence of the evicted pod. So for example you can prevent all voluntary evictions by specifying "100%".|
|`selector`|`LabelSelector`|Label query over pods whose evictions are managed by the disruption budget. A null selector will match no pods, while an empty ({}) selector will select all pods within the namespace.|
-
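As a brief sketch (workflow name and image are illustrative), a Workflow can ask the controller to create a PodDisruptionBudget for its pods; here `minAvailable` keeps at least one workflow pod running during voluntary disruptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: pdb-sketch-   # hypothetical name
spec:
  entrypoint: main
  podDisruptionBudget:
    minAvailable: 1
  templates:
    - name: main
      container:
        image: busybox:1.36   # illustrative image
        command: [sh, -c, "sleep 60"]
```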

PodSecurityContext

-

PodSecurityContext holds pod-level security attributes and common container settings. Some fields are also present in container.securityContext. Field values of container.securityContext take precedence over field values of PodSecurityContext.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
fsGroupintegerA special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows.
fsGroupChangePolicystringfsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows.
runAsGroupintegerThe GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
runAsNonRootbooleanIndicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence.
runAsUserintegerThe UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
seLinuxOptionsSELinuxOptionsThe SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows.
seccompProfileSeccompProfileThe seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows.
supplementalGroupsArray< integer >A list of groups applied to the first process run in each container, in addition to the container's primary GID. If unspecified, no groups will be added to any container. Note that this field cannot be set when spec.os.name is windows.
sysctlsArray<Sysctl>Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows.
windowsOptionsWindowsSecurityContextOptionsThe Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux.
-

Toleration

-

The pod this Toleration is attached to tolerates any taint that matches the triple &lt;key,value,effect&gt; using the matching operator &lt;operator&gt;.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
effectstringEffect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler.
keystringKey is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys.
operatorstringOperator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. Possible enum values: - "Equal" - "Exists"
tolerationSecondsintegerTolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system.
valuestringValue is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string.
-
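For example (the taint key and value are placeholders), workflow pods can tolerate a dedicated node taint by setting tolerations at the workflow (or template) level:

```yaml
spec:
  tolerations:
    - key: dedicated            # placeholder taint key
      operator: Equal
      value: workflows          # placeholder taint value
      effect: NoSchedule
```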

PersistentVolumeClaim

-

PersistentVolumeClaim is a user's request for and claim to a persistent volume

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`apiVersion`|`string`|APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources|
|`kind`|`string`|Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds|
|`metadata`|`ObjectMeta`|Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata|
|`spec`|`PersistentVolumeClaimSpec`|Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims|
|`status`|`PersistentVolumeClaimStatus`|Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims|
-

Volume

-

Volume represents a named volume in a pod that may be accessed by any container in the pod.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
awsElasticBlockStoreAWSElasticBlockStoreVolumeSourceAWSElasticBlockStore represents an AWS Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
azureDiskAzureDiskVolumeSourceAzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.
azureFileAzureFileVolumeSourceAzureFile represents an Azure File Service mount on the host and bind mount to the pod.
cephfsCephFSVolumeSourceCephFS represents a Ceph FS mount on the host that shares a pod's lifetime
cinderCinderVolumeSourceCinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md
configMapConfigMapVolumeSourceConfigMap represents a configMap that should populate this volume
csiCSIVolumeSourceCSI (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature).
downwardAPIDownwardAPIVolumeSourceDownwardAPI represents downward API about the pod that should populate this volume
emptyDirEmptyDirVolumeSourceEmptyDir represents a temporary directory that shares a pod's lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
ephemeralEphemeralVolumeSourceEphemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. A pod can use both types of ephemeral volumes and persistent volumes at the same time.
fcFCVolumeSourceFC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.
flexVolumeFlexVolumeSourceFlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.
flockerFlockerVolumeSourceFlocker represents a Flocker volume attached to a kubelet's host machine. This depends on the Flocker control service being running
gcePersistentDiskGCEPersistentDiskVolumeSourceGCEPersistentDisk represents a GCE Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
~~gitRepo~~~~GitRepoVolumeSource~~~~GitRepo represents a git repository at a particular revision.~~ DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.
glusterfsGlusterfsVolumeSourceGlusterfs represents a Glusterfs mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md
hostPathHostPathVolumeSourceHostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
iscsiISCSIVolumeSourceISCSI represents an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md
namestringVolume's name. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
nfsNFSVolumeSourceNFS represents an NFS mount on the host that shares a pod's lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs
persistentVolumeClaimPersistentVolumeClaimVolumeSourcePersistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims
photonPersistentDiskPhotonPersistentDiskVolumeSourcePhotonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine
portworxVolumePortworxVolumeSourcePortworxVolume represents a portworx volume attached and mounted on kubelets host machine
projectedProjectedVolumeSourceItems for all in one resources secrets, configmaps, and downward API
quobyteQuobyteVolumeSourceQuobyte represents a Quobyte mount on the host that shares a pod's lifetime
rbdRBDVolumeSourceRBD represents a Rados Block Device mount on the host that shares a pod's lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md
scaleIOScaleIOVolumeSourceScaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes.
secretSecretVolumeSourceSecret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret
storageosStorageOSVolumeSourceStorageOS represents a StorageOS volume attached and mounted on Kubernetes nodes.
vsphereVolumeVsphereVirtualDiskVolumeSourceVsphereVolume represents a vSphere volume attached and mounted on kubelets host machine
-

Time

-

Time is a wrapper around time.Time which supports correct marshaling to YAML and JSON. Wrappers are provided for many of the factory methods that the time package offers.

-

ObjectReference

-

ObjectReference contains enough information to let you inspect or modify the referred object.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
apiVersionstringAPI version of the referent.
fieldPathstringIf referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object.
kindstringKind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
namespacestringNamespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/
resourceVersionstringSpecific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
uidstringUID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids
-

Duration

-

Duration is a wrapper around time.Duration which supports correct marshaling to YAML and JSON. In particular, it marshals into strings, which can be used as map keys in json.

-

Fields

- - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`duration`|`string`|No description available|
-

LabelSelector

-

A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`matchExpressions`|`Array<LabelSelectorRequirement>`|matchExpressions is a list of label selector requirements. The requirements are ANDed.|
|`matchLabels`|`Map< string , string >`|matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.|
-

IntOrString

-

No description available

-

Container

-

A single application container that you want to run within a pod.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
argsArray< string >Arguments to the entrypoint. The docker image's CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
commandArray< string >Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell
envArray<EnvVar>List of environment variables to set in the container. Cannot be updated.
envFromArray<EnvFromSource>List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.
imagestringDocker image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.
imagePullPolicystringImage pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images Possible enum values: - "Always" means that kubelet always attempts to pull the latest image. Container will fail If the pull fails. - "IfNotPresent" means that kubelet pulls if the image isn't present on disk. Container will fail if the image isn't present and the pull fails. - "Never" means that kubelet never pulls an image, but only uses a local image. Container will fail if the image isn't present
lifecycleLifecycleActions that the management system should take in response to container lifecycle events. Cannot be updated.
livenessProbeProbePeriodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
namestringName of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated.
portsArray<ContainerPort>List of ports to expose from the container. Exposing a port here gives the system additional information about the network connections a container uses, but is primarily informational. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Cannot be updated.
readinessProbeProbePeriodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
resourcesResourceRequirementsCompute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
securityContextSecurityContextSecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
startupProbeProbeStartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
stdinbooleanWhether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false.
stdinOncebooleanWhether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false
terminationMessagePathstringOptional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.
terminationMessagePolicystringIndicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. Possible enum values: - "FallbackToLogsOnError" will read the most recent contents of the container logs for the container status message when the container exits with an error and the terminationMessagePath has no contents. - "File" is the default behavior and will set the container status message to the contents of the container's terminationMessagePath when the container exits.
ttybooleanWhether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false.
volumeDevicesArray<VolumeDevice>volumeDevices is the list of block devices to be used by the container.
volumeMountsArray<VolumeMount>Pod volumes to mount into the container's filesystem. Cannot be updated.
workingDirstringContainer's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated.
-

ConfigMapKeySelector

-

Selects a key from a ConfigMap.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`key`|`string`|The key to select.|
|`name`|`string`|Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names|
|`optional`|`boolean`|Specify whether the ConfigMap or its key must be defined|
-

VolumeMount

-

VolumeMount describes a mounting of a Volume within a container.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`mountPath`|`string`|Path within the container at which the volume should be mounted. Must not contain ':'.|
|`mountPropagation`|`string`|mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.|
|`name`|`string`|This must match the Name of a Volume.|
|`readOnly`|`boolean`|Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false.|
|`subPath`|`string`|Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root).|
|`subPathExpr`|`string`|Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive.|
-
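A minimal sketch (workflow name, volume name, image, and path are illustrative) of a workflow-level volume mounted into a template's container:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volume-sketch-   # hypothetical name
spec:
  entrypoint: main
  volumes:
    - name: workdir
      emptyDir: {}
  templates:
    - name: main
      container:
        image: busybox:1.36              # illustrative image
        command: [sh, -c, "echo hello > /work/out.txt"]
        volumeMounts:
          - name: workdir                # must match the volume name above
            mountPath: /work
```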

EnvVar

-

EnvVar represents an environment variable present in a Container.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`name`|`string`|Name of the environment variable. Must be a C_IDENTIFIER.|
|`value`|`string`|Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".|
|`valueFrom`|`EnvVarSource`|Source for the environment variable's value. Cannot be used if value is not empty.|
-

EnvFromSource

-

EnvFromSource represents the source of a set of ConfigMaps

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`configMapRef`|`ConfigMapEnvSource`|The ConfigMap to select from|
|`prefix`|`string`|An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER.|
|`secretRef`|`SecretEnvSource`|The Secret to select from|
-

Lifecycle

-

Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
| Field Name | Field Type | Description |
|:----------:|:----------:|---------------|
|`postStart`|`LifecycleHandler`|PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks|
|`preStop`|`LifecycleHandler`|PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod's termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks|
-

Probe

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| exec | ExecAction | Exec specifies the action to take. |
| failureThreshold | integer | Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. |
| grpc | GRPCAction | GRPC specifies an action involving a GRPC port. This is an alpha field and requires enabling GRPCContainerProbe feature gate. |
| httpGet | HTTPGetAction | HTTPGet specifies the http request to perform. |
| initialDelaySeconds | integer | Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
| periodSeconds | integer | How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. |
| successThreshold | integer | Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. |
| tcpSocket | TCPSocketAction | TCPSocket specifies an action involving a TCP port. |
| terminationGracePeriodSeconds | integer | Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. |
| timeoutSeconds | integer | Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes |
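For illustration, a container with a readiness probe (HTTP) and a liveness probe (TCP); the path and port are placeholders:

```yaml
      container:
        image: nginx:1.25
        readinessProbe:
          httpGet:
            path: /healthz            # placeholder health endpoint
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
          failureThreshold: 3
        livenessProbe:
          tcpSocket:
            port: 80
          timeoutSeconds: 1
```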

ContainerPort

ContainerPort represents a network port in a single container.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| containerPort | integer | Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. |
| hostIP | string | What host IP to bind the external port to. |
| hostPort | integer | Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. |
| name | string | If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. |
| protocol | string | Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". Possible enum values: - "SCTP" is the SCTP protocol. - "TCP" is the TCP protocol. - "UDP" is the UDP protocol. |
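A minimal sketch of declaring a named port on a container; the port name and number are placeholders:

```yaml
      container:
        image: nginx:1.25
        ports:
          - name: http                # IANA_SVC_NAME, unique within the pod
            containerPort: 80
            protocol: TCP
```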

ResourceRequirements

ResourceRequirements describes the compute resource requirements.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| limits | Quantity | Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| requests | Quantity | Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
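A minimal sketch of requests and limits on a template's container; the specific quantities are placeholders:

```yaml
      container:
        image: alpine:3.18
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi
```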

SecurityContext

SecurityContext holds security configuration that will be applied to a container. Some fields are present in both SecurityContext and PodSecurityContext. When both are set, the values in SecurityContext take precedence.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| allowPrivilegeEscalation | boolean | AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows. |
| capabilities | Capabilities | The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. |
| privileged | boolean | Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. |
| procMount | string | procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. |
| readOnlyRootFilesystem | boolean | Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. |
| runAsGroup | integer | The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. |
| runAsNonRoot | boolean | Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. |
| runAsUser | integer | The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. |
| seLinuxOptions | SELinuxOptions | The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. |
| seccompProfile | SeccompProfile | The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. |
| windowsOptions | WindowsSecurityContextOptions | The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. |
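For illustration, a restrictive container-level security context; the UID and the specific settings are placeholders, not a recommendation from the upstream reference:

```yaml
      container:
        image: alpine:3.18
        securityContext:
          runAsNonRoot: true
          runAsUser: 1000             # placeholder UID
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
```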

VolumeDevice

volumeDevice describes a mapping of a raw block device within a container.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| devicePath | string | devicePath is the path inside of the container that the device will be mapped to. |
| name | string | name must match the name of a persistentVolumeClaim in the pod |

SecretKeySelector

SecretKeySelector selects a key of a Secret.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| key | string | The key of the secret to select from. Must be a valid secret key. |
| name | string | Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names |
| optional | boolean | Specify whether the Secret or its key must be defined |
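As a sketch of one common place this type appears in a Workflow, an S3 output artifact can reference its credentials through SecretKeySelector fields; the bucket, Secret name, and keys below are placeholders:

```yaml
      outputs:
        artifacts:
          - name: result
            path: /tmp/result.txt
            s3:
              endpoint: s3.amazonaws.com
              bucket: my-bucket               # placeholder bucket
              key: result.txt
              accessKeySecret:                # SecretKeySelector
                name: my-s3-credentials       # placeholder Secret
                key: accessKey
              secretKeySecret:                # SecretKeySelector
                name: my-s3-credentials
                key: secretKey
```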

ManagedFieldsEntry

ManagedFieldsEntry is a workflow-id, a FieldSet and the group version of the resource that the fieldset applies to.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| apiVersion | string | APIVersion defines the version of this resource that this field set applies to. The format is "group/version" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted. |
| fieldsType | string | FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: "FieldsV1" |
| fieldsV1 | FieldsV1 | FieldsV1 holds the first JSON version format as described in the "FieldsV1" type. |
| manager | string | Manager is an identifier of the workflow managing these fields. |
| operation | string | Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'. |
| subresource | string | Subresource is the name of the subresource used to update that object, or empty string if the object was updated through the main resource. The value of this field is used to distinguish between managers, even if they share the same name. For example, a status update will be distinct from a regular update using the same manager name. Note that the APIVersion field is not related to the Subresource field and it always corresponds to the version of the main resource. |
| time | Time | Time is timestamp of when these fields were set. It should always be empty if Operation is 'Apply' |

OwnerReference

OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| apiVersion | string | API version of the referent. |
| blockOwnerDeletion | boolean | If true, AND if the owner has the "foregroundDeletion" finalizer, then the owner cannot be deleted from the key-value store until this reference is removed. Defaults to false. To set this field, a user needs "delete" permission of the owner, otherwise 422 (Unprocessable Entity) will be returned. |
| controller | boolean | If true, this reference points to the managing controller. |
| kind | string | Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds |
| name | string | Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names |
| uid | string | UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids |

NodeAffinity

Node affinity is a group of node affinity scheduling rules.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| preferredDuringSchedulingIgnoredDuringExecution | Array<PreferredSchedulingTerm> | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. |
| requiredDuringSchedulingIgnoredDuringExecution | NodeSelector | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. |
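For illustration, node affinity set on a single Workflow template; the label keys and values are placeholders:

```yaml
    - name: arch-pinned-step
      container:
        image: alpine:3.18
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/arch
                    operator: In
                    values: [amd64]
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              preference:
                matchExpressions:
                  - key: topology.kubernetes.io/zone   # placeholder label
                    operator: In
                    values: [us-east-1a]
```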

PodAffinity

Pod affinity is a group of inter pod affinity scheduling rules.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| preferredDuringSchedulingIgnoredDuringExecution | Array<WeightedPodAffinityTerm> | The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. |
| requiredDuringSchedulingIgnoredDuringExecution | Array<PodAffinityTerm> | If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. |

PodAntiAffinity

Pod anti affinity is a group of inter pod anti affinity scheduling rules.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| preferredDuringSchedulingIgnoredDuringExecution | Array<WeightedPodAffinityTerm> | The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. |
| requiredDuringSchedulingIgnoredDuringExecution | Array<PodAffinityTerm> | If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. |

PodDNSConfigOption

PodDNSConfigOption defines DNS resolver options of a pod.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| name | string | Required. |
| value | string | No description available |

SELinuxOptions

SELinuxOptions are the labels to be applied to the container.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| level | string | Level is SELinux level label that applies to the container. |
| role | string | Role is a SELinux role label that applies to the container. |
| type | string | Type is a SELinux type label that applies to the container. |
| user | string | User is a SELinux user label that applies to the container. |

SeccompProfile

SeccompProfile defines a pod/container's seccomp profile settings. Only one profile source may be set.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| localhostProfile | string | localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must only be set if type is "Localhost". |
| type | string | type indicates which kind of seccomp profile will be applied. Valid options are: Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied. Possible enum values: - "Localhost" indicates a profile defined in a file on the node should be used. The file's location relative to /seccomp. - "RuntimeDefault" represents the default container runtime seccomp profile. - "Unconfined" indicates no seccomp profile is applied (A.K.A. unconfined). |

Sysctl

Sysctl defines a kernel parameter to be set.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| name | string | Name of a property to set |
| value | string | Value of a property to set |

WindowsSecurityContextOptions

WindowsSecurityContextOptions contain Windows-specific options and credentials.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| gmsaCredentialSpec | string | GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. |
| gmsaCredentialSpecName | string | GMSACredentialSpecName is the name of the GMSA credential spec to use. |
| hostProcess | boolean | HostProcess determines if a container should be run as a 'Host Process' container. This field is alpha-level and will only be honored by components that enable the WindowsHostProcessContainers feature flag. Setting this field without the feature flag will result in errors when validating the Pod. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. |
| runAsUserName | string | The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. |

PersistentVolumeClaimSpec

PersistentVolumeClaimSpec describes the common attributes of storage devices and allows a Source for provider-specific attributes.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| accessModes | Array< string > | AccessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 |
| dataSource | TypedLocalObjectReference | This field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. If the AnyVolumeDataSource feature gate is enabled, this field will always have the same contents as the DataSourceRef field. |
| dataSourceRef | TypedLocalObjectReference | Specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any local object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the DataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, both fields (DataSource and DataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. There are two important differences between DataSource and DataSourceRef: * While DataSource only allows two specific types of objects, DataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While DataSource ignores disallowed values (dropping them), DataSourceRef preserves all values, and generates an error if a disallowed value is specified. (Alpha) Using this field requires the AnyVolumeDataSource feature gate to be enabled. |
| resources | ResourceRequirements | Resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources |
| selector | LabelSelector | A label query over volumes to consider for binding. |
| storageClassName | string | Name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1 |
| volumeMode | string | volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. |
| volumeName | string | VolumeName is the binding reference to the PersistentVolume backing this claim. |
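A minimal sketch of this spec as used by a Workflow's volumeClaimTemplates; the claim name, storage class, and size are placeholders:

```yaml
spec:
  volumeClaimTemplates:
    - metadata:
        name: workdir                 # placeholder claim name
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard    # placeholder StorageClass
        resources:
          requests:
            storage: 1Gi
```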

PersistentVolumeClaimStatus

PersistentVolumeClaimStatus is the current status of a persistent volume claim.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| accessModes | Array< string > | AccessModes contains the actual access modes the volume backing the PVC has. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1 |
| allocatedResources | Quantity | The storage resource within AllocatedResources tracks the capacity allocated to a PVC. It may be larger than the actual capacity when a volume expansion operation is requested. For storage quota, the larger value from allocatedResources and PVC.spec.resources is used. If allocatedResources is not set, PVC.spec.resources alone is used for quota calculation. If a volume expansion capacity request is lowered, allocatedResources is only lowered if there are no expansion operations in progress and if the actual volume capacity is equal or lower than the requested capacity. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. |
| capacity | Quantity | Represents the actual resources of the underlying volume. |
| conditions | Array<PersistentVolumeClaimCondition> | Current Condition of persistent volume claim. If underlying persistent volume is being resized then the Condition will be set to 'ResizeStarted'. |
| phase | string | Phase represents the current phase of PersistentVolumeClaim. Possible enum values: - "Bound" used for PersistentVolumeClaims that are bound - "Lost" used for PersistentVolumeClaims that lost their underlying PersistentVolume. The claim was bound to a PersistentVolume and this volume does not exist any longer and all data on it was lost. - "Pending" used for PersistentVolumeClaims that are not yet bound |
| resizeStatus | string | ResizeStatus stores status of resize operation. ResizeStatus is not set by default but when expansion is complete resizeStatus is set to empty string by resize controller or kubelet. This is an alpha field and requires enabling RecoverVolumeExpansionFailure feature. |

AWSElasticBlockStoreVolumeSource

Represents a Persistent Disk resource in AWS. An AWS EBS disk must exist before mounting to a container. The disk must also be in the same AWS zone as the kubelet. An AWS EBS disk can only be mounted as read/write once. AWS EBS volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore |
| partition | integer | The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). |
| readOnly | boolean | Specify "true" to force and set the ReadOnly property in VolumeMounts to "true". If omitted, the default is "false". More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore |
| volumeID | string | Unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore |

AzureDiskVolumeSource

AzureDisk represents an Azure Data Disk mount on the host and bind mount to the pod.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| cachingMode | string | Host Caching mode: None, Read Only, Read Write. |
| diskName | string | The Name of the data disk in the blob storage |
| diskURI | string | The URI of the data disk in the blob storage |
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. |
| kind | string | Expected values Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared |
| readOnly | boolean | Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |

AzureFileVolumeSource

AzureFile represents an Azure File Service mount on the host and bind mount to the pod.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| readOnly | boolean | Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |
| secretName | string | the name of secret that contains Azure Storage Account Name and Key |
| shareName | string | Share Name |

CephFSVolumeSource

Represents a Ceph Filesystem mount that lasts the lifetime of a pod. Cephfs volumes do not support ownership management or SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| monitors | Array< string > | Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it |
| path | string | Optional: Used as the mounted root, rather than the full Ceph tree, default is / |
| readOnly | boolean | Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it |
| secretFile | string | Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it |
| secretRef | LocalObjectReference | Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it |
| user | string | Optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it |

CinderVolumeSource

Represents a cinder volume resource in Openstack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet. Cinder volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md |
| readOnly | boolean | Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md |
| secretRef | LocalObjectReference | Optional: points to a secret object containing parameters used to connect to OpenStack. |
| volumeID | string | volume id used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md |

ConfigMapVolumeSource

Adapts a ConfigMap into a volume. The contents of the target ConfigMap's Data field will be presented in a volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. ConfigMap volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| defaultMode | integer | Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. |
| items | Array<KeyToPath> | If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. |
| name | string | Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names |
| optional | boolean | Specify whether the ConfigMap or its keys must be defined |

CSIVolumeSource

Represents a source location of a volume to mount, managed by an external CSI driver.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| driver | string | Driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. |
| fsType | string | Filesystem type to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. |
| nodePublishSecretRef | LocalObjectReference | NodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. |
| readOnly | boolean | Specifies a read-only configuration for the volume. Defaults to false (read/write). |
| volumeAttributes | Map< string , string > | VolumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. |

DownwardAPIVolumeSource

DownwardAPIVolumeSource represents a volume containing downward API info. Downward API volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| defaultMode | integer | Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. |
| items | Array<DownwardAPIVolumeFile> | Items is a list of downward API volume file |

EmptyDirVolumeSource

Represents an empty directory for a pod. Empty directory volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| medium | string | What type of storage medium should back this directory. The default is "" which means to use the node's default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir |
| sizeLimit | Quantity | Total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. The default is nil which means that the limit is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir |
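For illustration, a Workflow-level emptyDir volume mounted into a template's container; the volume name, size limit, and mount path are placeholders:

```yaml
spec:
  volumes:
    - name: scratch
      emptyDir:
        medium: Memory        # optional; omit to use the node's default medium
        sizeLimit: 256Mi
  templates:
    - name: use-scratch
      container:
        image: alpine:3.18
        command: [sh, -c, 'df -h /scratch']
        volumeMounts:
          - name: scratch
            mountPath: /scratch
```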

EphemeralVolumeSource

Represents an ephemeral volume that is handled by a normal storage driver.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| volumeClaimTemplate | PersistentVolumeClaimTemplate | Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be <pod name>-<volume name> where <volume name> is the name from the PodSpec.Volumes array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). An existing PVC with that name that is not owned by the pod will not be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. Required, must not be nil. |

FCVolumeSource

Represents a Fibre Channel volume. Fibre Channel volumes can only be mounted as read/write once. Fibre Channel volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. |
| lun | integer | Optional: FC target lun number |
| readOnly | boolean | Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |
| targetWWNs | Array< string > | Optional: FC target worldwide names (WWNs) |
| wwids | Array< string > | Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously. |

FlexVolumeSource

FlexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| driver | string | Driver is the name of the driver to use for this volume. |
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. |
| options | Map< string , string > | Optional: Extra command options if any. |
| readOnly | boolean | Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |
| secretRef | LocalObjectReference | Optional: SecretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts. |

FlockerVolumeSource

Represents a Flocker volume mounted by the Flocker agent. One and only one of datasetName and datasetUUID should be set. Flocker volumes do not support ownership management or SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| datasetName | string | Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated |
| datasetUUID | string | UUID of the dataset. This is unique identifier of a Flocker dataset |

GCEPersistentDiskVolumeSource

Represents a Persistent Disk resource in Google Compute Engine. A GCE PD must exist before mounting to a container. The disk must also be in the same GCE project and zone as the kubelet. A GCE PD can only be mounted as read/write once or read-only many times. GCE PDs support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk |
| partition | integer | The partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk |
| pdName | string | Unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk |
| readOnly | boolean | ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk |

GitRepoVolumeSource

Represents a volume that is populated with the contents of a git repository. Git repo volumes do not support ownership management. Git repo volumes support SELinux relabeling. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod's container.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| directory | string | Target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. |
| repository | string | Repository URL |
| revision | string | Commit hash for the specified revision. |

GlusterfsVolumeSource

Represents a Glusterfs mount that lasts the lifetime of a pod. Glusterfs volumes do not support ownership management or SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| endpoints | string | EndpointsName is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod |
| path | string | Path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod |
| readOnly | boolean | ReadOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod |

HostPathVolumeSource

Represents a host path mapped into a pod. Host path volumes do not support ownership management or SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| path | string | Path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath |
| type | string | Type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath |

ISCSIVolumeSource

Represents an ISCSI disk. ISCSI volumes can only be mounted as read/write once. ISCSI volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| chapAuthDiscovery | boolean | whether support iSCSI Discovery CHAP authentication |
| chapAuthSession | boolean | whether support iSCSI Session CHAP authentication |
| fsType | string | Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi |
| initiatorName | string | Custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection. |
| iqn | string | Target iSCSI Qualified Name. |
| iscsiInterface | string | iSCSI Interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). |
| lun | integer | iSCSI Target Lun number. |
| portals | Array< string > | iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). |
| readOnly | boolean | ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. |
| secretRef | LocalObjectReference | CHAP Secret for iSCSI target and initiator authentication |
| targetPortal | string | iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). |

NFSVolumeSource

Represents an NFS mount that lasts the lifetime of a pod. NFS volumes do not support ownership management or SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| path | string | Path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs |
| readOnly | boolean | ReadOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs |
| server | string | Server is the hostname or IP address of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs |

PersistentVolumeClaimVolumeSource

PersistentVolumeClaimVolumeSource references the user's PVC in the same namespace. This volume finds the bound PV and mounts that volume for the pod. A PersistentVolumeClaimVolumeSource is, essentially, a wrapper around another type of volume that is owned by someone else (the system).

Fields

| Field Name | Field Type | Description |
|---|---|---|
| claimName | string | ClaimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims |
| readOnly | boolean | Will force the ReadOnly setting in VolumeMounts. Default false. |

PhotonPersistentDiskVolumeSource

Represents a Photon Controller persistent disk resource.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. |
| pdID | string | ID that identifies Photon Controller persistent disk |

PortworxVolumeSource

PortworxVolumeSource represents a Portworx volume resource.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | FSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. |
| readOnly | boolean | Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |
| volumeID | string | VolumeID uniquely identifies a Portworx volume |

ProjectedVolumeSource

Represents a projected volume source.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| defaultMode | integer | Mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. |
| sources | Array<VolumeProjection> | list of volume projections |

QuobyteVolumeSource

Represents a Quobyte mount that lasts the lifetime of a pod. Quobyte volumes do not support ownership management or SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| group | string | Group to map volume access to Default is no group |
| readOnly | boolean | ReadOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. |
| registry | string | Registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes |
| tenant | string | Tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin |
| user | string | User to map volume access to Defaults to serviceaccount user |
| volume | string | Volume is a string that references an already created Quobyte volume by name. |

RBDVolumeSource

Represents a Rados Block Device mount that lasts the lifetime of a pod. RBD volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd |
| image | string | The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |
| keyring | string | Keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |
| monitors | Array< string > | A collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |
| pool | string | The rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |
| readOnly | boolean | ReadOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |
| secretRef | LocalObjectReference | SecretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |
| user | string | The rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it |

ScaleIOVolumeSource

ScaleIOVolumeSource represents a persistent ScaleIO volume.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". |
| gateway | string | The host address of the ScaleIO API Gateway. |
| protectionDomain | string | The name of the ScaleIO Protection Domain for the configured storage. |
| readOnly | boolean | Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |
| secretRef | LocalObjectReference | SecretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. |
| sslEnabled | boolean | Flag to enable/disable SSL communication with Gateway, default false |
| storageMode | string | Indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. |
| storagePool | string | The ScaleIO Storage Pool associated with the protection domain. |
| system | string | The name of the storage system as configured in ScaleIO. |
| volumeName | string | The name of a volume already created in the ScaleIO system that is associated with this volume source. |

SecretVolumeSource

Adapts a Secret into a volume. The contents of the target Secret's Data field will be presented in a volume as files using the keys in the Data field as the file names. Secret volumes support ownership management and SELinux relabeling.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| defaultMode | integer | Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. |
| items | Array<KeyToPath> | If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. |
| optional | boolean | Specify whether the Secret or its keys must be defined |
| secretName | string | Name of the secret in the pod's namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret |

StorageOSVolumeSource

Represents a StorageOS persistent volume resource.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. |
| readOnly | boolean | Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. |
| secretRef | LocalObjectReference | SecretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. |
| volumeName | string | VolumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. |
| volumeNamespace | string | VolumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. |

VsphereVirtualDiskVolumeSource

Represents a vSphere volume resource.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| fsType | string | Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. |
| storagePolicyID | string | Storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. |
| storagePolicyName | string | Storage Policy Based Management (SPBM) profile name. |
| volumePath | string | Path that identifies vSphere volume vmdk |

LabelSelectorRequirement

A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| key | string | key is the label key that the selector applies to. |
| operator | string | operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. |
| values | Array< string > | values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. |

EnvVarSource

EnvVarSource represents a source for the value of an EnvVar.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| configMapKeyRef | ConfigMapKeySelector | Selects a key of a ConfigMap. |
| fieldRef | ObjectFieldSelector | Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels['<KEY>'], metadata.annotations['<KEY>'], spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs. |
| resourceFieldRef | ResourceFieldSelector | Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported. |
| secretKeyRef | SecretKeySelector | Selects a key of a secret in the pod's namespace |
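As a sketch, two environment variables drawing their values from different sources; the container name, Secret name, and key are placeholders:

```yaml
        env:
          - name: MEMORY_LIMIT
            valueFrom:
              resourceFieldRef:
                containerName: main       # placeholder container name
                resource: limits.memory
          - name: API_TOKEN
            valueFrom:
              secretKeyRef:
                name: api-credentials     # placeholder Secret
                key: token
```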

ConfigMapEnvSource

ConfigMapEnvSource selects a ConfigMap to populate the environment variables with. The contents of the target ConfigMap's Data field will represent the key-value pairs as environment variables.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| name | string | Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names |
| optional | boolean | Specify whether the ConfigMap must be defined |

SecretEnvSource

SecretEnvSource selects a Secret to populate the environment variables with. The contents of the target Secret's Data field will represent the key-value pairs as environment variables.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| name | string | Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names |
| optional | boolean | Specify whether the Secret must be defined |

LifecycleHandler

LifecycleHandler defines a specific action that should be taken in a lifecycle hook. One and only one of the fields, except TCPSocket, must be specified.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| exec | ExecAction | Exec specifies the action to take. |
| httpGet | HTTPGetAction | HTTPGet specifies the http request to perform. |
| tcpSocket | TCPSocketAction | Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. |

ExecAction

ExecAction describes a "run in container" action.

Fields

| Field Name | Field Type | Description |
|---|---|---|
| command | Array< string > | Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. |

GRPCAction

No description available

Fields

| Field Name | Field Type | Description |
|---|---|---|
| port | integer | Port number of the gRPC service. Number must be in the range 1 to 65535. |
| service | string | Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). If this is not specified, the default behavior is defined by gRPC. |

HTTPGetAction

-

HTTPGetAction describes an action based on HTTP Get requests.

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
hoststringHost name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead.
httpHeadersArray<HTTPHeader>Custom headers to set in the request. HTTP allows repeated headers.
pathstringPath to access on the HTTP server.
portIntOrStringName or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
schemestringScheme to use for connecting to the host. Defaults to HTTP. Possible enum values: - "HTTP" means that the scheme used will be http:// - "HTTPS" means that the scheme used will be https://
-

TCPSocketAction

-

TCPSocketAction describes an action based on opening a socket

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
hoststringOptional: Host name to connect to, defaults to the pod IP.
portIntOrStringNumber or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME.
-

Quantity

-

Quantity is a fixed-point representation of a number. It provides convenient marshaling/unmarshaling in JSON and YAML, in addition to String() and AsInt64() accessors. The serialization format is: <quantity> ::= <signedNumber><suffix> (Note that <suffix> may be empty, from the "" case in <decimalSI>.) <digit> ::= 0 | 1 | ... | 9 <digits> ::= <digit> | <digit><digits> <number> ::= <digits> | <digits>.<digits> | <digits>. | .<digits> <sign> ::= "+" | "-" <signedNumber> ::= <number> | <sign><number> <suffix> ::= <binarySI> | <decimalExponent> | <decimalSI> <binarySI> ::= Ki | Mi | Gi | Ti | Pi | Ei (International System of units; See: http://physics.nist.gov/cuu/Units/binary.html) <decimalSI> ::= m | "" | k | M | G | T | P | E (Note that 1024 = 1Ki but 1000 = 1k; I didn't choose the capitalization.) <decimalExponent> ::= "e" <signedNumber> | "E" <signedNumber> No matter which of the three exponent forms is used, no quantity may represent a number greater than 2^63-1 in magnitude, nor may it have more than 3 decimal places. Numbers larger or more precise will be capped or rounded up. (E.g.: 0.1m will be rounded up to 1m.) This may be extended in the future if we require larger or smaller quantities. When a Quantity is parsed from a string, it will remember the type of suffix it had, and will use the same type again when it is serialized. Before serializing, Quantity will be put in "canonical form". This means that Exponent/suffix will be adjusted up or down (with a corresponding increase or decrease in Mantissa) such that: a. No precision is lost b. No fractional digits will be emitted c. The exponent (or suffix) is as large as possible. The sign will be omitted unless the number is negative. Examples: 1.5 will be serialized as "1500m" 1.5Gi will be serialized as "1536Mi" Note that the quantity will NEVER be internally represented by a floating point number. That is the whole point of this exercise. Non-canonical values will still parse as long as they are well formed, but will be re-emitted in their canonical form. (So always use canonical form, or don't diff.) This format is intended to make it difficult to use these numbers without writing some sort of special handling code in the hopes that that will cause implementors to also use a fixed point implementation.

-
-Examples with this field (click to open) -
- -
- -

Capabilities

-

Adds and removes POSIX capabilities from running containers.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
addArray< string >Added capabilities
dropArray< string >Removed capabilities
-

FieldsV1

-

FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format. Each key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is position of an item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set. The exact format is defined in sigs.k8s.io/structured-merge-diff

-

PreferredSchedulingTerm

-

An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op).

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
preferenceNodeSelectorTermA node selector term, associated with the corresponding weight.
weightintegerWeight associated with matching the corresponding nodeSelectorTerm, in the range 1-100.
-

NodeSelector

-

A node selector represents the union of the results of one or more label queries over a set of nodes; that is, it represents the OR of the selectors represented by the node selector terms.

-

Fields

- - - - - - - - - - - - - - - -
Field NameField TypeDescription
nodeSelectorTermsArray<NodeSelectorTerm>Required. A list of node selector terms. The terms are ORed.
-

WeightedPodAffinityTerm

-

The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s)

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
podAffinityTermPodAffinityTermRequired. A pod affinity term, associated with the corresponding weight.
weightintegerweight associated with matching the corresponding podAffinityTerm, in the range 1-100.
-

PodAffinityTerm

-

Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
labelSelectorLabelSelectorA label query over a set of resources, in this case pods.
namespaceSelectorLabelSelectorA label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. This field is beta-level and is only honored when PodAffinityNamespaceSelector feature is enabled.
namespacesArray< string >namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace"
topologyKeystringThis pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed.
-

TypedLocalObjectReference

-

TypedLocalObjectReference contains enough information to let you locate the typed referenced object inside the same namespace.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
apiGroupstringAPIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required.
kindstringKind is the type of resource being referenced
namestringName is the name of resource being referenced
-

PersistentVolumeClaimCondition

-

PersistentVolumeClaimCondition contains details about the state of a PVC

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
lastProbeTimeTimeLast time we probed the condition.
lastTransitionTimeTimeLast time the condition transitioned from one status to another.
messagestringHuman-readable message indicating details about last transition.
reasonstringUnique, this should be a short, machine understandable string that gives the reason for condition's last transition. If it reports "ResizeStarted" that means the underlying persistent volume is being resized.
statusstringNo description available
typestringPossible enum values: - "FileSystemResizePending" - controller resize is finished and a file system resize is pending on node - "Resizing" - a user trigger resize of pvc has been started
-

KeyToPath

-

Maps a string key to a path within a volume.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
keystringThe key to project.
modeintegerOptional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
pathstringThe relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'.
-

DownwardAPIVolumeFile

-

DownwardAPIVolumeFile represents information to create the file containing the pod field

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
fieldRefObjectFieldSelectorRequired: Selects a field of the pod: only annotations, labels, name and namespace are supported.
modeintegerOptional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.
pathstringRequired: Path is the relative path name of the file to be created. Must not be absolute or contain the '..' path. Must be utf-8 encoded. The first item of the relative path must not start with '..'
resourceFieldRefResourceFieldSelectorSelects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.
-

PersistentVolumeClaimTemplate

-

PersistentVolumeClaimTemplate is used to produce PersistentVolumeClaim objects as part of an EphemeralVolumeSource.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
metadataObjectMetaMay contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation.
specPersistentVolumeClaimSpecThe specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here.
-

VolumeProjection

-

Projection that may be projected along with other supported volume types

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
configMapConfigMapProjectioninformation about the configMap data to project
downwardAPIDownwardAPIProjectioninformation about the downwardAPI data to project
secretSecretProjectioninformation about the secret data to project
serviceAccountTokenServiceAccountTokenProjectioninformation about the serviceAccountToken data to project
-

ObjectFieldSelector

-

ObjectFieldSelector selects an APIVersioned field of an object.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
apiVersionstringVersion of the schema the FieldPath is written in terms of, defaults to "v1".
fieldPathstringPath of the field to select in the specified API version.
-

ResourceFieldSelector

-

ResourceFieldSelector represents container resources (cpu, memory) and their output format

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
containerNamestringContainer name: required for volumes, optional for env vars
divisorQuantitySpecifies the output format of the exposed resources, defaults to "1"
resourcestringRequired: resource to select
-

HTTPHeader

-

HTTPHeader describes a custom header to be used in HTTP probes

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
namestringThe header field name
valuestringThe header field value
-

NodeSelectorTerm

-

A null or empty node selector term matches no objects. Its requirements are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm.

-

Fields

- - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
matchExpressionsArray<NodeSelectorRequirement>A list of node selector requirements by node's labels.
matchFieldsArray<NodeSelectorRequirement>A list of node selector requirements by node's fields.
-

ConfigMapProjection

-

Adapts a ConfigMap into a projected volume. The contents of the target ConfigMap's Data field will be presented in a projected volume as files using the keys in the Data field as the file names, unless the items element is populated with specific mappings of keys to paths. Note that this is identical to a configmap volume source without the default mode.

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
itemsArray<KeyToPath>If unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
optionalbooleanSpecify whether the ConfigMap or its keys must be defined
-

DownwardAPIProjection

-

Represents downward API info for projecting into a projected volume. Note that this is identical to a downwardAPI volume source without the default mode.

-

Fields

- - - - - - - - - - - - - - - -
Field NameField TypeDescription
itemsArray<DownwardAPIVolumeFile>Items is a list of DownwardAPIVolume file
-

SecretProjection

-

Adapts a secret into a projected volume. The contents of the target Secret's Data field will be presented in a projected volume as files using the keys in the Data field as the file names. Note that this is identical to a secret volume source without the default mode.

-
-Examples with this field (click to open) -
- -
- -

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
itemsArray<KeyToPath>If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'.
namestringName of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
optionalbooleanSpecify whether the Secret or its key must be defined
-

ServiceAccountTokenProjection

-

ServiceAccountTokenProjection represents a projected service account token volume. This projection can be used to insert a service account token into the pods runtime filesystem for use against APIs (Kubernetes API Server or otherwise).

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
audiencestringAudience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver.
expirationSecondsintegerExpirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes.
pathstringPath is the path relative to the mount point of the file to project the token into.
-

NodeSelectorRequirement

-

A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values.

-

Fields

- - - - - - - - - - - - - - - - - - - - - - - - - -
Field NameField TypeDescription
keystringThe label key that the selector applies to.
operatorstringRepresents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist, Gt, and Lt. Possible enum values: - "DoesNotExist" - "Exists" - "Gt" - "In" - "Lt" - "NotIn"
valuesArray< string >An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch.
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/fields/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/high-availability/index.html b/high-availability/index.html index 8b74de7f027b..29c29a949ea3 100644 --- a/high-availability/index.html +++ b/high-availability/index.html @@ -1,3995 +1,11 @@ - - - + - - - - - - - - - - - - High-Availability (HA) - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + High-Availability (HA) - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

High-Availability (HA)

-

Workflow Controller

-

Before v3.0, only one controller could run at once. (If it crashed, Kubernetes would start another pod.)

-
-

v3.0

-
-

For many users, a short loss of workflow service may be acceptable - the new controller will just continue running workflows if it restarts. However, if you need higher service guarantees, new pods may take too long to start running workflows. In that case, run two replicas, one of which will be kept on hot standby.

-

A voluntary pod disruption can cause both replicas to be replaced at the same time. You should use a Pod Disruption -Budget to prevent this and Pod Priority to recover faster from an involuntary pod disruption:
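A minimal sketch of such a Pod Disruption Budget, assuming the controller pods carry the app: workflow-controller label used by the official manifests:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: workflow-controller
  namespace: argo
spec:
  minAvailable: 1          # never allow a voluntary disruption to evict both replicas
  selector:
    matchLabels:
      app: workflow-controller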

- -

Argo Server

-
-

v2.6

-
-

Run a minimum of two replicas, typically three; otherwise API and webhook requests may be dropped.
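A minimal sketch, assuming the standard argo-server Deployment from the official manifests, is to patch the replica count:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
  namespace: argo
spec:
  replicas: 3   # at least two, typically three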

- - - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/high-availability/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/http-template/index.html b/http-template/index.html index a6418d8fc7f8..baaab42a32df 100644 --- a/http-template/index.html +++ b/http-template/index.html @@ -1,4003 +1,11 @@ - - - + - - - - - - - - - - - - HTTP Template - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + HTTP Template - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

HTTP Template

-
-

v3.2 and after

-
-

HTTP Template is a type of template which can execute HTTP Requests.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: http-template-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      steps:
-        - - name: get-google-homepage
-            template: http
-            arguments:
-              parameters: [{name: url, value: "https://www.google.com"}]
-    - name: http
-      inputs:
-        parameters:
-          - name: url
-      http:
-        timeoutSeconds: 20 # Default 30
-        url: "{{inputs.parameters.url}}"
-        method: "GET" # Default GET
-        headers:
-          - name: "x-header-name"
-            value: "test-value"
-        # Template will succeed if evaluated to true, otherwise will fail
-        # Available variables:
-        #  request.body: string, the request body
-        #  request.headers: map[string][]string, the request headers
-        #  response.url: string, the request url
-        #  response.method: string, the request method
-        #  response.statusCode: int, the response status code
-        #  response.body: string, the response body
-        #  response.headers: map[string][]string, the response headers
-        successCondition: "response.body contains \"google\"" # available since v3.3
-        body: "test body" # Change request body
-
-

Argo Agent

-

HTTP Templates use the Argo Agent, which executes the requests independently of the controller. The Agent and the Workflow -Controller communicate through the WorkflowTaskSet CRD, which is created for each running Workflow that requires the use -of the Agent.

-

In order to use the Argo Agent, you will need to ensure that you have added the appropriate workflow RBAC to add an agent role to Argo Workflows. An example agent role can be found in the quick-start manifests.

- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/http-template/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/ide-setup/index.html b/ide-setup/index.html index 029d238bc94a..f1332da42e87 100644 --- a/ide-setup/index.html +++ b/ide-setup/index.html @@ -1,4050 +1,11 @@ - - - + - - - - - - - - - - - - IDE Set-Up - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + IDE Set-Up - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - - - - -
-
- - - - - - - - -

IDE Set-Up

-

Validating Argo YAML against the JSON Schema

-

Argo provides a JSON Schema that enables validation of YAML resources in your IDE.

-

JetBrains IDEs (Community & Ultimate Editions)

-

YAML validation is supported natively in IDEA.

-

Configure your IDE to reference the Argo schema and map it to your Argo YAML files:

-

JetBrains IDEs Configure Schema

-
    -
  • The schema is located here.
  • -
  • Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project!
  • -
  • Note that you may need to restart IDEA to pick up the changes.
  • -
-

That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.

-

JetBrains IDEs Example Functionality

-

JetBrains IDEs (Community & Ultimate Editions) + Kubernetes Plugin

-

If you have the JetBrains Kubernetes Plugin -installed in your IDE, the validation can be configured in the Kubernetes plugin settings -instead of using the internal JSON schema file validator.

-

JetBrains IDEs Configure Schema with Kubernetes Plugin

-

Unlike the previous JSON schema validation method, the plugin detects the necessary validation -based on Kubernetes resource definition keys and does not require a file glob pattern. -Like the previously described method:

-
    -
  • The schema is located here.
  • -
  • Note that you may need to restart IDEA to pick up the changes.
  • -
-

VSCode

-

The Red Hat YAML plugin will provide error highlighting and auto-completion for Argo resources.

-

Install the Red Hat YAML plugin in VSCode and open extension settings:

-

VSCode Install Plugin

-

Open the YAML schema settings:

-

VSCode YAML Schema Settings

-

Add the Argo schema setting yaml.schemas:

-

VSCode Specify Argo Schema

-
    -
  • The schema is located here.
  • -
  • Specify a file glob pattern that locates your Argo files. The example glob here is for the Argo Github project!
  • -
  • Note that other defined schema with overlapping glob patterns may cause errors.
  • -
-

That's it. Open an Argo YAML file and you should see smarter behavior, including type errors and context-sensitive auto-complete.
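Alternatively, the Red Hat YAML language server also honors a per-file modeline, which avoids glob patterns entirely. A sketch, assuming the URL below still points at the published Argo JSON Schema:

# yaml-language-server: $schema=https://raw.githubusercontent.com/argoproj/argo-workflows/master/api/jsonschema/schema.json
apiVersion: argoproj.io/v1alpha1
kind: Workflow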

-

VScode Example Functionality

- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/ide-setup/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/index.html b/index.html index 2f194fd1ccb6..3cf82eb6df95 100644 --- a/index.html +++ b/index.html @@ -1,4254 +1,11 @@ - - - + - - - - - - - - - - - - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - - - - -
-
- - - - - - - - -

Argo Workflows

-

slack -CII Best Practices -Twitter Follow

-

What is Argo Workflows?

-

Argo Workflows is an open source container-native workflow engine for orchestrating parallel jobs on Kubernetes. Argo -Workflows is implemented as a Kubernetes CRD (Custom Resource Definition).

-
    -
  • Define workflows where each step in the workflow is a container.
  • -
  • Model multi-step workflows as a sequence of tasks or capture the dependencies between tasks using a directed acyclic - graph (DAG).
  • -
  • Easily run compute intensive jobs for machine learning or data processing in a fraction of the time using Argo - Workflows on Kubernetes.
  • -
-

Argo is a Cloud Native Computing Foundation (CNCF) graduated project.

-

Use Cases

- -

Why Argo Workflows?

-
    -
  • Argo Workflows is the most popular workflow execution engine for Kubernetes.
  • -
  • Light-weight, scalable, and easier to use.
  • -
  • Designed from the ground up for containers without the overhead and limitations of legacy VM and server-based - environments.
  • -
  • Cloud agnostic and can run on any Kubernetes cluster.
  • -
-

Read what people said in our latest survey

-

Try Argo Workflows

-

Access the demo environment (login using Github)

-

Screenshot

-

Who uses Argo Workflows?

-

200+ organizations are officially using Argo Workflows

-

Ecosystem

-

Just some of the projects that use or rely on Argo Workflows (complete list here):

- -

Client Libraries

-

Check out our Java, Golang and Python clients.

-

Quickstart

- -

Documentation

-

View the docs

-

Features

-

An incomplete list of features Argo Workflows provides:

-
    -
  • UI to visualize and manage Workflows
  • -
  • Artifact support (S3, Artifactory, Alibaba Cloud OSS, Azure Blob Storage, HTTP, Git, GCS, raw)
  • -
  • Workflow templating to store commonly used Workflows in the cluster
  • -
  • Archiving Workflows after executing for later access
  • -
  • Scheduled workflows using cron
  • -
  • Server interface with REST API (HTTP and GRPC)
  • -
  • DAG or Steps based declaration of workflows
  • -
  • Step level input & outputs (artifacts/parameters)
  • -
  • Loops
  • -
  • Parameterization
  • -
  • Conditionals
  • -
  • Timeouts (step & workflow level)
  • -
  • Retry (step & workflow level)
  • -
  • Resubmit (memoized)
  • -
  • Suspend & Resume
  • -
  • Cancellation
  • -
  • K8s resource orchestration
  • -
  • Exit Hooks (notifications, cleanup)
  • -
  • Garbage collection of completed workflows
  • -
  • Scheduling (affinity/tolerations/node selectors)
  • -
  • Volumes (ephemeral/existing)
  • -
  • Parallelism limits
  • -
  • Daemoned steps
  • -
  • DinD (docker-in-docker)
  • -
  • Script steps
  • -
  • Event emission
  • -
  • Prometheus metrics
  • -
  • Multiple executors
  • -
  • Multiple pod and workflow garbage collection strategies
  • -
  • Automatically calculated resource usage per step
  • -
  • Java/Golang/Python SDKs
  • -
  • Pod Disruption Budget support
  • -
  • Single-sign on (OAuth2/OIDC)
  • -
  • Webhook triggering
  • -
  • CLI
  • -
  • Out-of-the box and custom Prometheus metrics
  • -
  • Windows container support
  • -
  • Embedded widgets
  • -
  • Multiplex log viewer
  • -
-

Community Meetings

-

We host monthly community meetings where we and the community showcase demos and discuss the current and future state of -the project. Feel free to join us! For Community Meeting information, minutes and recordings -please see here.

-

Participation in the Argo Workflows project is governed by -the CNCF Code of Conduct

-

Community Blogs and Presentations

- -

Project Resources

- -

Security

-

See SECURITY.md.

- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/inline-templates/index.html b/inline-templates/index.html index bf4e35b12b5d..6deee727b579 100644 --- a/inline-templates/index.html +++ b/inline-templates/index.html @@ -1,3925 +1,11 @@ - - - + - - - - - - - - - - - - Inline Templates - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Inline Templates - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Inline Templates

-
-

v3.2 and after

-
-

You can inline other templates within DAG and steps.

-

Examples:

- -
-

Warning

-

You can only inline once. Inlining a DAG within a DAG will not work.

-
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/inline-templates/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/installation/index.html b/installation/index.html index 97bc1e59f736..129713d75a93 100644 --- a/installation/index.html +++ b/installation/index.html @@ -1,4090 +1,11 @@ - - - + - - - - - - - - - - - - Installation - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Installation - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
- -
-
- - -
-
- - - - - - - - -

Installation

-

Non-production installation

-

If you just want to try out Argo Workflows in a non-production environment (including on desktop via minikube/kind/k3d etc) follow the quick-start guide.

-

Production installation

-

Installation Methods

-

Official release manifests

-

To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred). Scroll down to the Controller and Server section and execute the kubectl commands.

-

You can use Kustomize to patch your preferred configurations on top of the base manifest.
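A minimal kustomization.yaml sketch (file names here are illustrative; the install manifest is assumed to be copied from the release assets rather than referenced as a remote base):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argo
resources:
  - install.yaml                              # copied from the official release assets
patches:
  - path: workflow-controller-configmap.yaml  # your configuration overrides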

-

⚠️ If you are using GitOps, never use Kustomize remote base: this is dangerous. Instead, copy the manifests into your Git repo.

-

⚠️ latest is tip, not stable. Never run it in production.

-

Argo Workflows Helm Chart

-

You can install Argo Workflows using the community maintained Helm charts.

-

Installation options

-

Determine your base installation option.

-
    -
  • A cluster install will watch and execute workflows in all namespaces. This is the default installation option when installing using the official release manifests.
  • -
  • A namespace install only executes workflows in the namespace it is installed in (typically argo). Look for namespace-install.yaml in the release assets.
  • -
  • A managed namespace install only executes workflows in a separate namespace from the one it is installed in. See Managed Namespace for more details.
  • -
-

Additional installation considerations

-

Review the following:

- - - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/installation/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/intermediate-inputs/index.html b/intermediate-inputs/index.html index 255132d93a03..8f0b860e626a 100644 --- a/intermediate-inputs/index.html +++ b/intermediate-inputs/index.html @@ -1,4138 +1,11 @@ - - - + - - - - - - - - - - - - Intermediate Parameters - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Intermediate Parameters - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Intermediate Parameters

-
-

v3.4 and after

-
-

Traditionally, Argo Workflows has supported input parameters from the UI only when the workflow starts; after that, it's pretty much on autopilot. However, there are many use cases where human interaction is required.

-

This interaction takes the form of providing input text in the middle of the workflow, or choosing from a dropdown of options which a workflow step itself generates.

-

A similar feature, which you may know from Jenkins, is the pipeline-input-step

-

Example use cases include:

-
    -
  • A human approval before doing something in production environment.
  • -
  • Programmatic generation of a list of inputs from which the user chooses, e.g. choosing from a list of available databases which the workflow itself generates.
  • -
-

This feature is achieved via suspend template.

-

The workflow will pause at a suspend node, and the user will be able to update parameters using either text fields or dropdowns.

-

Intermediate Parameters Approval Example

-
    -
  • The example below shows an approval step using static enum values.
  • -
  • The user will be able to choose between [YES, NO] which will be used in subsequent steps.
  • -
-

Approval Example Demo

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: intermediate-parameters-cicd-
-spec:
-  entrypoint: cicd-pipeline
-  templates:
-    - name: cicd-pipeline
-      steps:
-          - - name: deploy-pre-prod
-              template: deploy
-          - - name: approval
-              template: approval
-          - - name: deploy-prod
-              template: deploy
-              when: '{{steps.approval.outputs.parameters.approve}} == YES'
-    - name: approval
-      suspend: {}
-      inputs:
-          parameters:
-            - name: approve
-              default: 'NO'
-              enum:
-                  - 'YES'
-                  - 'NO'
-              description: >-
-                Choose YES to continue workflow and deploy to production
-      outputs:
-          parameters:
-            - name: approve
-              valueFrom:
-                  supplied: {}
-    - name: deploy
-      container:
-          image: 'argoproj/argosay:v2'
-          command:
-            - /argosay
-          args:
-            - echo
-            - deploying
-
-

Intermediate Parameters DB Schema Update Example

-
    -
  • The example below shows programmatic generation of enum values.
  • -
  • The generate-db-list template generates an output called db_list.
  • -
  • This output is of type json.
  • -
  • Since this json has a key called enum, with an array of options, the UI will parse this and display it as a dropdown.
  • -
  • The output can also be any string, in which case the UI will display it as a text field, which the user can later edit.
  • -
-

DB Schema Update Example Demo

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: intermediate-parameters-db-
-spec:
-  entrypoint: db-schema-update
-  templates:
-      - name: db-schema-update
-        steps:
-          - - name: generate-db-list
-              template: generate-db-list
-          - - name: choose-db
-              template: choose-db
-              arguments:
-                parameters:
-                  - name: db_name
-                    value: '{{steps.generate-db-list.outputs.parameters.db_list}}'
-          - - name: update-schema
-              template: update-schema
-              arguments:
-                parameters:
-                  - name: db_name
-                    value: '{{steps.choose-db.outputs.parameters.db_name}}'
-      - name: generate-db-list
-        outputs:
-          parameters:
-            - name: db_list
-              valueFrom:
-                path: /tmp/db_list.txt
-        container:
-          name: main
-          image: 'argoproj/argosay:v2'
-          command:
-            - sh
-            - '-c'
-          args:
-            - >-
-              echo "{\"enum\": [\"db1\", \"db2\", \"db3\"]}" | tee /tmp/db_list.txt
-      - name: choose-db
-        inputs:
-          parameters:
-            - name: db_name
-              description: >-
-                Choose DB to update a schema
-        outputs:
-          parameters:
-            - name: db_name
-              valueFrom:
-                supplied: {}
-        suspend: {}
-      - name: update-schema
-        inputs:
-          parameters:
-            - name: db_name
-        container:
-          name: main
-          image: 'argoproj/argosay:v2'
-          command:
-            - sh
-            - '-c'
-          args:
-            - echo Updating DB {{inputs.parameters.db_name}}
-
-

Some Important Details

-
    -
  • The suspended node should have the SAME parameters defined in inputs.parameters and outputs.parameters.
  • -
  • All the output parameters in the suspended node should have valueFrom.supplied: {}
  • -
  • The selected values will be available at <SUSPENDED_NODE>.outputs.parameters.<PARAMETER_NAME>
  • -
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/intermediate-inputs/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/key-only-artifacts/index.html b/key-only-artifacts/index.html index 88d6e9130eea..5093c9aee318 100644 --- a/key-only-artifacts/index.html +++ b/key-only-artifacts/index.html @@ -1,3966 +1,11 @@ - - - + - - - - - - - - - - - - Key-Only Artifacts - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Key-Only Artifacts - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Key-Only Artifacts

-
-

v3.0 and after

-
-

A key-only artifact is an input or output artifact where you only specify the key, omitting the bucket, secrets etc. When these are omitted, the bucket/secrets from the configured artifact repository is used.

-

This allows you to move the configuration of the artifact repository out of the workflow specification.

-

This is closely related to artifact repository ref. You'll want to use them together for maximum benefit.
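A sketch of pairing the two, assuming an artifact repository ConfigMap named my-artifact-repository with an entry my-key exists in the workflow's namespace:

spec:
  entrypoint: main
  artifactRepositoryRef:
    configMap: my-artifact-repository  # assumed ConfigMap; defaults to "artifact-repositories" if omitted
    key: my-key                        # the entry within the ConfigMap to use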

-

This should probably be your default if you're using v3.0:

-
    -
  • Reduces the size of workflows (improved performance).
  • -
  • User owned artifact repository set-up configuration (simplified management).
  • -
  • Decouples the artifact location configuration from the workflow, allowing you to re-configure the artifact repository without changing your workflows or templates.
  • -
-

Example:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: key-only-artifacts-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      dag:
-        tasks:
-          - name: generate
-            template: generate
-          - name: consume
-            template: consume
-            dependencies:
-              - generate
-    - name: generate
-      container:
-        image: argoproj/argosay:v2
-        args: [ echo, hello, /mnt/file ]
-      outputs:
-        artifacts:
-          - name: file
-            path: /mnt/file
-            s3:
-              key: my-file
-    - name: consume
-      container:
-        image: argoproj/argosay:v2
-        args: [cat, /tmp/file]
-      inputs:
-        artifacts:
-          - name: file
-            path: /tmp/file
-            s3:
-              key: my-file
-
-
-

Warning

-

The location data is no longer stored in /status/nodes. Any tooling that relies on this will need to be updated.

-
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/key-only-artifacts/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/kubectl/index.html b/kubectl/index.html index 9c038643093e..fa20bcd79a36 100644 --- a/kubectl/index.html +++ b/kubectl/index.html @@ -1,3918 +1,11 @@ - - - + - - - - - - - - - - - - kubectl - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + kubectl - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

kubectl

-

You can also create Workflows directly with kubectl. -However, the Argo CLI offers extra features that kubectl does not, such as YAML validation, workflow visualization, parameter passing, retries and resubmits, suspend and resume, and more.

-
kubectl create -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml
-kubectl get wf -n argo
-kubectl get wf hello-world-xxx -n argo
-kubectl get po -n argo --selector=workflows.argoproj.io/workflow=hello-world-xxx
-kubectl logs hello-world-yyy -c main -n argo
-
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/kubectl/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/lifecyclehook/index.html b/lifecyclehook/index.html index c9883e567c3c..2c39096dd2ea 100644 --- a/lifecyclehook/index.html +++ b/lifecyclehook/index.html @@ -1,4058 +1,11 @@ - - - + - - - - - - - - - - - - Lifecycle-Hook - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Lifecycle-Hook - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Lifecycle-Hook

-
-

v3.3 and after

-
-

Introduction

-

A LifecycleHook triggers an action based on a conditional expression or on completion of a step or template. It is configured either at the workflow-level or template-level, for instance as a function of the workflow.status or steps.status, respectively. A LifecycleHook executes during execution time and executes once. It will execute in parallel to its step or template once the expression is satisfied.

-

In other words, a LifecycleHook functions like an exit handler with a conditional expression. You must not name a LifecycleHook exit or it becomes an exit handler; otherwise the hook name has no relevance.

-

Workflow-level LifecycleHook: Executes the template when a configured expression is met during the workflow.

- -

Template-level Lifecycle-Hook: Executes the template when a configured expression is met during the step in which it is defined.
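A minimal sketch of a template-level hook attached to a step (the template names here are illustrative):

templates:
  - name: main
    steps:
      - - name: step1
          template: heads
          hooks:
            running:
              expression: steps.step1.status == "Running"
              template: http   # notification template to run once the expression is met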

- -

Supported conditions

- -

Unsupported conditions

-
    -
  • outputs are not usable since a LifecycleHook executes during execution time and outputs are not produced until the step is completed. You can use outputs from previous steps, just not the one you're hooking into. If you'd like to use outputs, create an exit handler instead - all the status variables are available there, so you can still conditionally decide what to do.
  • -
-

Notification use case

-

A LifecycleHook can be used to configure a notification depending on a workflow status change or template status change, like the example below:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
- generateName: lifecycle-hook-
-spec:
- entrypoint: main
- hooks:
-   exit:
-     template: http
-   running:
-     expression: workflow.status == "Running"
-     template: http
- templates:
-   - name: main
-     steps:
-       - - name: step1
-           template: heads
-
-   - name: heads
-     container:
-       image: alpine:3.6
-       command: [sh, -c]
-       args: ["echo \"it was heads\""]
-
-   - name: http
-     http:
-       url: http://dummy.restapiexample.com/api/v1/employees
-
-
-

Put differently, an exit handler is like a workflow-level LifecycleHook with an expression of workflow.status == "Succeeded" or workflow.status == "Failed" or workflow.status == "Error".

-
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/lifecyclehook/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/links/index.html b/links/index.html index d10baf7221cc..0b7923035586 100644 --- a/links/index.html +++ b/links/index.html @@ -1,3947 +1,11 @@ - - - + - - - - - - - - - - - - Links - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Links - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Links

-
-

v2.7 and after

-
-

You can configure Argo Server to show custom links:

-
    -
  • A "Get Help" button in the bottom right of the window, linking to your organization's help pages or chat room.
  • -
  • Deep-links to your facilities (e.g. logging facility) in the UI for both the workflow and each workflow pod.
  • -
  • A button at the top of the workflow view to navigate to customized views.
  • -
-

Links can contain placeholder variables. Placeholder variables are indicated by the dollar sign and curly braces: ${variable}.

-

These are the commonly used variables:

-
    -
  • ${metadata.namespace}: Kubernetes namespace of the current workflow / pod / event source / sensor
  • -
  • ${metadata.name}: Name of the current workflow / pod / event source / sensor
  • -
  • ${status.startedAt}: Start time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z
  • -
  • ${status.finishedAt}: End time-stamp of the workflow / pod, in the format of 2021-01-01T10:35:56Z. If the workflow/pod is still running, this variable will be null
  • -
-

See workflow-controller-configmap.yaml for a complete example
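A minimal sketch of the links entry in the workflow-controller-configmap (the URLs are placeholders; scope selects where the link appears):

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  links: |
    - name: Workflow Logs
      scope: workflow
      url: https://logging.example.com/?namespace=${metadata.namespace}&workflow=${metadata.name}
    - name: Pod Logs
      scope: pod
      url: https://logging.example.com/?namespace=${metadata.namespace}&pod=${metadata.name}
    - name: Get Help
      scope: chat
      url: https://chat.example.com/your-support-channel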

-
-

v3.1 and after

-
-

Epoch time-stamps are available now. These are useful if we want to add links to logging facilities like Grafana -or DataDog, as they support Unix epoch time-stamp formats as URL -parameters:

-
    -
  • ${status.startedAtEpoch}: Start time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds, e.g. 1609497000000.
  • -
  • ${status.finishedAtEpoch}: End time-stamp of the workflow/pod, in the Unix epoch time format in milliseconds, e.g. 1609497000000. If the workflow/pod is still running, this variable will represent the current time.
  • -
-
-

v3.1 and after

-
-

In addition to the above variables, we can now access all workflow fields under ${workflow}.

-

For example, one may find it useful to define a custom label in the workflow and access it by ${workflow.metadata.labels.custom_label_name}

-

We can also access workflow fields in a pod link. For example, ${workflow.metadata.name} returns -the name of the workflow instead of the name of the pod. If the field doesn't exist on the workflow then the value will be an empty string.
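For example, a pod-scoped link that combines workflow fields with the epoch time-stamps might look like this (the Grafana dashboard URL is a placeholder):

- name: Pod Metrics
  scope: pod
  url: https://grafana.example.com/d/pod-dashboard?var-workflow=${workflow.metadata.name}&from=${status.startedAtEpoch}&to=${status.finishedAtEpoch}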

- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/links/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/managed-namespace/index.html b/managed-namespace/index.html index fc6e47a2055a..1c8ea45f76f7 100644 --- a/managed-namespace/index.html +++ b/managed-namespace/index.html @@ -1,3937 +1,11 @@ - - - + - - - - - - - - - - - - Managed Namespace - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Managed Namespace - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Managed Namespace

-
-

v2.5 and after

-
-

You can install Argo in either namespace scoped or cluster scoped configurations. -The main difference is whether you install Roles or ClusterRoles, respectively.

-

In namespace scoped configuration, you must run both the Workflow Controller and Argo Server using --namespaced. -If you want to run workflows in a separate namespace, add --managed-namespace as well. -(In cluster scoped configuration, don't include --namespaced or --managed-namespace.)

-

For example:

-
      - args:
-        - --configmap
-        - workflow-controller-configmap
-        - --executor-image
-        - argoproj/workflow-controller:v2.5.1
-        - --namespaced
-        - --managed-namespace
-        - default
-
-

Please note that both cluster scoped and namespace scoped configurations require "admin" roles to install because Argo's Custom Resource Definitions (CRDs) must be created (CRDs are cluster scoped objects).

-
-

Example Use Case

-

You can use a managed namespace install if you want some users or services to run Workflows without granting them privileges in the namespace where Argo Workflows is installed. -For example, if you only run CI/CD Workflows that are maintained by the same team that manages the Argo Workflows installation, you may want a namespace install. -But if all the Workflows are run by a separate data science team, you may want to give them a "data-science-workflows" namespace and use a managed namespace install of Argo Workflows in another namespace.

-
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/managed-namespace/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/manually-create-secrets/index.html b/manually-create-secrets/index.html index 7a8e21731576..78fd6809e29f 100644 --- a/manually-create-secrets/index.html +++ b/manually-create-secrets/index.html @@ -1,3997 +1,11 @@ - - - + - - - - - - - - - - - - Service Account Secrets - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Service Account Secrets - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Service Account Secrets

-

As of Kubernetes v1.24, secrets are no longer automatically created for service accounts.

-

You must create a secret manually.

-

You must also make the secret discoverable. -You have two options:

-

Option 1 - Discovery By Name

-

Name your secret ${serviceAccountName}.service-account-token:

-
apiVersion: v1
-kind: Secret
-metadata:
-  name: default.service-account-token
-  annotations:
-    kubernetes.io/service-account.name: default
-type: kubernetes.io/service-account-token
-
-

This option is simpler than option 2, as you can create the secret and make it discoverable by name at the same time.

-

Option 2 - Discovery By Annotation

-

Annotate the service account with the secret name:

-
apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: default
-  annotations:
-    workflows.argoproj.io/service-account-token.name: my-token
-
-

This option is useful when the secret already exists, or the service account has a very long name.

- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/manually-create-secrets/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/memoization/index.html b/memoization/index.html index fa6f21e6df12..33dc81a6586a 100644 --- a/memoization/index.html +++ b/memoization/index.html @@ -1,4051 +1,11 @@ - - - + - - - - - - - - - - - - Step Level Memoization - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Step Level Memoization - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - Skip to content - - -
-
- -
- - - - - - -
- - - - - - - -
- -
- - - - -
-
- - - -
-
-
- - - - - - -
-
-
- - - -
-
-
- - - -
-
-
- - -
-
- - - - - - - - -

Step Level Memoization

-
-

v2.10 and after

-
-

Introduction

-

Workflows often have outputs that are expensive to compute. -Memoization reduces cost and workflow execution time by recording the result of previously run steps: -it stores the outputs of a template into a specified cache with a variable key.

-

Prior to version 3.5, memoization only works for steps which have outputs; if you attempt to use it on steps which do not, it should not work (there are some cases where it does, but they shouldn't). It was designed for 'pure' steps, where the purpose of running the step is to calculate some outputs based upon the step's inputs, and only the inputs. Pure steps should not interact with the outside world, but workflows won't enforce this on you.

-

If you are using workflows prior to version 3.5 you should look at the work avoidance technique instead of memoization if your steps don't have outputs.

-

In version 3.5 or later all steps can be memoized, whether or not they have outputs.

-

Cache Method

-

Currently, the cached data is stored in config-maps. This allows you to easily manipulate cache entries manually through kubectl and the Kubernetes API without having to go through Argo. All cache config-maps must have the label workflows.argoproj.io/configmap-type: Cache to be used as a cache. This prevents accidental access to other important config-maps in the system.
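As a hedged example, assuming a cache ConfigMap named whalesay-cache in the argo namespace (the same name used in the example further below), the required label could be added like this:

kubectl label configmap whalesay-cache -n argo workflows.argoproj.io/configmap-type=Cache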

-

Using Memoization

-

Memoization is set at the template level. You must specify a key, which can be a static string but more often depends on inputs. You must also specify a name for the config-map cache. Optionally you can set a maxAge in seconds or hours (e.g. 180s, 24h) to define how long the cache entry should be considered valid. If an entry is older than the maxAge, it will be ignored.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-   generateName: memoized-workflow-
-spec:
-   entrypoint: whalesay
-   templates:
-      - name: whalesay
-        memoize:
-           key: "{{inputs.parameters.message}}"
-           maxAge: "10s"
-           cache:
-              configMap:
-                 name: whalesay-cache
-
-

Find a simple example for memoization here.

-
-

Note

-

In order to use memoization it is necessary to add the verbs create and update to the configmaps resource for the appropriate (cluster) roles. In the case of a cluster install the argo-cluster-role cluster role should be updated, whilst for a namespace install the argo-role role should be updated.
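A minimal sketch of the rule that needs to be present, assuming a namespace install where the role is named argo-role as described above; the real role contains more rules, and the read verbs shown here are assumed to already exist:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-role
  namespace: argo
rules:
  # other existing rules omitted; the configmaps rule must include create and update
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "watch", "list", "create", "update"]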

-
-

FAQ

-
  1. If you see errors like error creating cache entry: ConfigMap \"reuse-task\" is invalid: []: Too long: must have at most 1048576 characters, this is due to the 1MB limit placed on the size of ConfigMap. Here are a couple of ways that might help resolve this:
    • Delete the existing ConfigMap cache or switch to use a different cache.
    • Reduce the size of the output parameters for the nodes that are being memoized.
    • Split your cache into different memoization keys and cache names so that each cache entry is small.
  2. My step isn't getting memoized, why not? If you are running workflows <3.5 ensure that you have specified at least one output on the step.
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/memoization/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/metrics/index.html b/metrics/index.html index 414ac2c79f13..c343a3e51ab0 100644 --- a/metrics/index.html +++ b/metrics/index.html @@ -1,4619 +1,11 @@ - - - + - - - - - - - - - - - - Prometheus Metrics - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Prometheus Metrics - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Prometheus Metrics

-
-

v2.7 and after

-
-

Introduction

-

Argo emits a certain number of controller metrics that inform on the state of the controller at any given time. Furthermore, -users can also define their own custom metrics to inform on the state of their Workflows.

-

Custom Prometheus metrics can be defined to be emitted on a Workflow- and Template-level basis. These can be useful -for many cases; some examples:

-
    -
  • Keeping track of the duration of a Workflow or Template over time, and setting an alert if it goes beyond a threshold
  • -
  • Keeping track of the number of times a Workflow or Template fails over time
  • -
  • Reporting an important internal metric, such as a model training score or an internal error rate
  • -
-

Emitting custom metrics with Argo is easy, but it's important to understand what makes a good Prometheus metric and the -best way to define metrics in Argo to avoid problems such as cardinality explosion.

-

Metrics and metrics in Argo

-

There are two kinds of metrics emitted by Argo: controller metrics and custom metrics.

-

Controller metrics

-

Metrics that inform on the state of the controller; i.e., they answer the question "What is the state of the controller right now?" -Default controller metrics can be scraped from service workflow-controller-metrics at the endpoint <host>:9090/metrics

-

Custom metrics

-

Metrics that inform on the state of a Workflow, or a series of Workflows. These custom metrics are defined by the user in the Workflow spec.

-

Emitting custom metrics is the responsibility of the emitter owner. Since the user defines Workflows in Argo, the user is responsible -for emitting metrics correctly.

-

What is and isn't a Prometheus metric

-

Prometheus metrics should be thought of as ephemeral data points of running processes; i.e., they are the answer to -the question "What is the state of my system right now?". Metrics should report things such as:

-
    -
  • a counter of the number of times a workflow or steps has failed, or
  • -
  • a gauge of workflow duration, or
  • -
  • an average of an internal metric such as a model training score or error rate.
  • -
-

Metrics are then routinely scraped and stored and -- when they are correctly designed -- they can represent time series. -Aggregating the examples above over time could answer useful questions such as:

-
    -
  • How has the error rate of this workflow or step changed over time?
  • -
  • How has the duration of this workflow changed over time? Is the current workflow running for too long?
  • -
  • Is our model improving over time?
  • -
-

Prometheus metrics should not be thought of as a store of data. Since metrics should only report the state of the system -at the current time, they should not be used to report historical data such as:

-
    -
  • the status of an individual instance of a workflow, or
  • -
  • how long a particular instance of a step took to run.
  • -
-

Metrics are also ephemeral, meaning there is no guarantee that they will be persisted for any amount of time. If you need -a way to view and analyze historical data, consider the workflow archive or reporting to logs.

-

Default Controller Metrics

-

Metrics for the Four Golden Signals are:

-
    -
  • Latency: argo_workflows_queue_latency
  • -
  • Traffic: argo_workflows_count and argo_workflows_queue_depth_count
  • -
  • Errors: argo_workflows_count and argo_workflows_error_count
  • -
  • Saturation: argo_workflows_workers_busy and argo_workflows_workflow_condition
  • -
- - -

argo_pod_missing

-

Pods were not seen. E.g. by being deleted by Kubernetes. You should only see this under high load.

-
-

Note

-

This metric's name starts with argo_ not argo_workflows_.

-
-

argo_workflows_count

-

Number of workflows in each phase. The Running count does not mean that a workflow's pods are running, just that the controller has scheduled them. A workflow can be stuck in Running with pending pods for a long time.

-

argo_workflows_error_count

-

A count of certain errors incurred by the controller.

-

argo_workflows_k8s_request_total

-

Number of API requests sent to the Kubernetes API.

-

argo_workflows_operation_duration_seconds

-

A histogram of durations of operations. An operation is a single workflow reconciliation loop within the workflow-controller. It's the time for the controller to process a single workflow after it has been read from the cluster and is a measure of the performance of the controller affected by the complexity of the workflow.

-

argo_workflows_pods_count

-

It is possible for a workflow to start, but no pods be running (e.g. cluster is too busy to run them). This metric sheds light on actual work being done.

-

argo_workflows_queue_adds_count

-

The number of additions to the queue of workflows or cron workflows.

-

argo_workflows_queue_depth_count

-

The depth of the queue of workflows or cron workflows to be processed by the controller.

-

argo_workflows_queue_latency

-

The time workflows or cron workflows spend in the queue waiting to be processed.

-

argo_workflows_workers_busy

-

The number of workers that are busy.

-

argo_workflows_workflow_condition

-

The number of workflows with different conditions. This will tell you the number of workflows with running pods.

-

argo_workflows_workflows_processed_count

-

A count of all Workflow updates processed by the controller.

-

Metric types

-

Please see the Prometheus docs on metric types.

-

How metrics work in Argo

-

In order to analyze the behavior of a workflow over time, we need to be able to link different instances -(i.e. individual executions) of a workflow together into a "series" for the purposes of emitting metrics. We do so by linking them together -with the same metric descriptor.

-

In Prometheus, a metric descriptor is defined as a metric's name and its key-value labels. For example, for a metric -tracking the duration of model execution over time, a metric descriptor could be:

-

argo_workflows_model_exec_time{model_name="model_a",phase="validation"}

-

This metric then represents the amount of time that "Model A" took to train in the phase "Validation". It is important -to understand that the metric name and its labels form the descriptor: argo_workflows_model_exec_time{model_name="model_b",phase="validation"} -is a different metric (and will track a different "series" altogether).

-

Now, whenever we run our first workflow that validates "Model A", a metric with the amount of time it took to do so will be created and emitted. For each subsequent time that this happens, no new metrics will be emitted and the same metric will be updated with the new value. Since, in effect, we are interested in the execution time of "validation" of "Model A" over time, we are no longer interested in the previous metric and can assume it has already been scraped.

-

In summary, whenever you want to track a particular metric over time, you should use the same metric name and metric -labels wherever it is emitted. This is how these metrics are "linked" as belonging to the same series.

-

Grafana Dashboard for Argo Controller Metrics

-

Please see the Argo Workflows metrics Grafana dashboard.

-

Defining metrics

-

Metrics are defined in-place on the Workflow/Step/Task where they are emitted from. Metrics are always processed after -the Workflow/Step/Task completes, with the exception of real-time metrics.

-

Metric definitions must include a name and a help doc string. They can also include any number of labels (when defining labels avoid cardinality explosion). Metrics with the same name must always use the exact same help string: having different metrics with the same name but a different help string will cause an error (this is a Prometheus requirement).

-

All metrics can also be conditionally emitted by defining a when clause. This when clause works the same as elsewhere -in a workflow.

-

A metric must also have a type; it can be one of gauge, histogram, or counter (see below). Within the metric type, a value must be specified. This value can either be a literal value or an Argo variable.

-

When defining a histogram, buckets must also be provided (see below).

-

Argo variables can be included anywhere in the metric spec, such as in labels, name, help, when, etc.

-

Metric names can only contain alphanumeric characters, _, and :.

-

Metric Spec

-

In Argo you can define a metric on the Workflow level or on the Template level. Here is an example of a Workflow -level Gauge metric that will report the Workflow duration time:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: model-training-
-spec:
-  entrypoint: steps
-  metrics:
-    prometheus:
-      - name: exec_duration_gauge         # Metric name (will be prepended with "argo_workflows_")
-        labels:                           # Labels are optional. Avoid cardinality explosion.
-          - key: name
-            value: model_a
-        help: "Duration gauge by name"    # A help doc describing your metric. This is required.
-        gauge:                            # The metric type. Available are "gauge", "histogram", and "counter".
-          value: "{{workflow.duration}}"  # The value of your metric. It could be an Argo variable (see variables doc) or a literal value
-
-...
-
-

An example of a Template-level Counter metric that will increase a counter every time the step fails:

-
...
-  templates:
-    - name: flakey
-      metrics:
-        prometheus:
-          - name: result_counter
-            help: "Count of step execution by result status"
-            labels:
-              - key: name
-                value: flakey
-            when: "{{status}} == Failed"       # Emit the metric conditionally. Works the same as normal "when"
-            counter:
-              value: "1"                            # This increments the counter by 1
-      container:
-        image: python:alpine3.6
-        command: ["python", -c]
-        # fail with a 66% probability
-        args: ["import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)"]
-...
-
-

A similar example of such a Counter metric that will increase for every step status

-
...
-  templates:
-    - name: flakey
-      metrics:
-        prometheus:
-          - name: result_counter
-            help: "Count of step execution by result status"
-            labels:
-              - key: name
-                value: flakey
-              - key: status
-                value: "{{status}}"    # Argo variable in `labels`
-            counter:
-              value: "1"
-      container:
-        image: python:alpine3.6
-        command: ["python", -c]
-        # fail with a 66% probability
-        args: ["import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)"]
-...
-
-

Finally, an example of a Template-level Histogram metric that tracks an internal value:

-
...
-  templates:
-    - name: random-int
-      metrics:
-        prometheus:
-          - name: random_int_step_histogram
-            help: "Value of the int emitted by random-int at step level"
-            when: "{{status}} == Succeeded"    # Only emit metric when step succeeds
-            histogram:
-              buckets:                              # Bins must be defined for histogram metrics
-                - 2.01                              # and are part of the metric descriptor.
-                - 4.01                              # All metrics in this series MUST have the
-                - 6.01                              # same buckets.
-                - 8.01
-                - 10.01
-              value: "{{outputs.parameters.rand-int-value}}"         # References itself for its output (see variables doc)
-      outputs:
-        parameters:
-          - name: rand-int-value
-            globalName: rand-int-value
-            valueFrom:
-              path: /tmp/rand_int.txt
-      container:
-        image: alpine:latest
-        command: [sh, -c]
-        args: ["RAND_INT=$((1 + RANDOM % 10)); echo $RAND_INT; echo $RAND_INT > /tmp/rand_int.txt"]
-...
-
-

Real-Time Metrics

-

Argo supports a limited number of real-time metrics. These metrics are emitted in real-time, beginning when the step execution starts -and ending when it completes. Real-time metrics are only available on Gauge type metrics and with a limited number of variables.

-

To define a real-time metric simply add realtime: true to a gauge metric with a valid real-time variable. For example:

-
  gauge:
-    realtime: true
-    value: "{{duration}}"
-
-

Metrics endpoint

-

By default, metrics are emitted by the workflow-controller on port 9090 on the /metrics path. By port-forwarding to the pod you can view the metrics in your browser at http://localhost:9090/metrics:

-

kubectl -n argo port-forward deploy/workflow-controller 9090:9090

-

A metrics service is not installed as part of the default installation so you will need to add one if you wish to use a Prometheus Service Monitor:

-
cat <<EOF | kubectl apply -f -
-apiVersion: v1
-kind: Service
-metadata:
-  labels:
-    app: workflow-controller
-  name: workflow-controller-metrics
-  namespace: argo
-spec:
-  ports:
-  - name: metrics
-    port: 9090
-    protocol: TCP
-    targetPort: 9090
-  selector:
-    app: workflow-controller
----
-apiVersion: monitoring.coreos.com/v1
-kind: ServiceMonitor
-metadata:
-  name: argo-workflows
-  namespace: argo
-spec:
-  endpoints:
-  - port: metrics
-  selector:
-    matchLabels:
-      app: workflow-controller
-EOF
-
-

If you have more than one controller pod, using one as a hot-standby, you should use a headless service to ensure that each pod is being scraped so that no metrics are missed.
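For example, a sketch of the same metrics Service as above made headless by setting clusterIP: None, so that Prometheus resolves and scrapes every controller pod individually rather than a single virtual IP:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: workflow-controller
  name: workflow-controller-metrics
  namespace: argo
spec:
  clusterIP: None   # headless: endpoints resolve to each controller pod
  ports:
  - name: metrics
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    app: workflow-controller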

-

Metrics configuration

-

You can adjust various elements of the metrics configuration by changing values in the Workflow Controller Config Map.

-
metricsConfig: |
-  # Enabled controls metric emission. Default is true, set "enabled: false" to turn off
-  enabled: true
-
-  # Path is the path where metrics are emitted. Must start with a "/". Default is "/metrics"
-  path: /metrics
-
-  # Port is the port where metrics are emitted. Default is "9090"
-  port: 8080
-
-  # MetricsTTL sets how often custom metrics are cleared from memory. Default is "0", metrics are never cleared
-  metricsTTL: "10m"
-
-  # IgnoreErrors is a flag that instructs prometheus to ignore metric emission errors. Default is "false"
-  ignoreErrors: false
-
-  # Use a self-signed cert for TLS, default false
-  secure: false
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/metrics/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/node-field-selector/index.html b/node-field-selector/index.html index 54ca26251e15..af35db61d6d6 100644 --- a/node-field-selector/index.html +++ b/node-field-selector/index.html @@ -1,4079 +1,11 @@ - - - + - - - - - - - - - - - - Node Field Selectors - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Node Field Selectors - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Node Field Selectors

-
-

v2.8 and after

-
-

Introduction

-

The resume, stop and retry Argo CLI and API commands support a --node-field-selector parameter to allow the user to select a subset of nodes for the command to apply to.

-

In the case of the resume and stop commands these are the nodes that should be resumed or stopped.

-

In the case of the retry command it allows specifying nodes that should be restarted even if they were previously successful (and must be used in combination with --restart-successful)

-

The format of this when used with the CLI is:

-
--node-field-selector=FIELD=VALUE
-
-

Possible options

-

The field can be any of:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Field | Description
displayName | Display name of the node. This is the name of the node as it is displayed on the CLI or UI, without considering its ancestors (see example below). This is a useful shortcut if there is only one node with the same displayName
name | Full name of the node. This is the full name of the node, including its ancestors (see example below). Using name is necessary when two or more nodes share the same displayName and disambiguation is required.
templateName | Template name of the node
phase | Phase status of the node - e.g. Running
templateRef.name | The name of the workflow template the node is referring to
templateRef.template | The template within the workflow template the node is referring to
inputs.parameters.<NAME>.value | The value of input parameter NAME
-

The operator can be '=' or '!='. Multiple selectors can be combined with a comma, in which case they are ANDed together.

-

Examples

-

To filter for nodes where the input parameter 'foo' is equal to 'bar':

-
--node-field-selector=inputs.parameters.foo.value=bar
-
-

To filter for nodes where the input parameter 'foo' is equal to 'bar' and phase is not running:

-
--node-field-selector=inputs.parameters.foo.value=bar,phase!=Running
-
-

Consider the following workflow:

-
 ● appr-promotion-ffsv4    code-release
- ├─✔ start                 sample-template/email                 appr-promotion-ffsv4-3704914002  2s
- ├─● app1                  wftempl1/approval-and-promotion
- │ ├─✔ notification-email  sample-template/email                 appr-promotion-ffsv4-524476380   2s
- │ └─ǁ wait-approval       sample-template/waiting-for-approval
- ├─✔ app2                  wftempl2/promotion
- │ ├─✔ notification-email  sample-template/email                 appr-promotion-ffsv4-2580536603  2s
- │ ├─✔ pr-approval         sample-template/approval              appr-promotion-ffsv4-3445567645  2s
- │ └─✔ deployment          sample-template/promote               appr-promotion-ffsv4-970728982   1s
- └─● app3                  wftempl1/approval-and-promotion
-   ├─✔ notification-email  sample-template/email                 appr-promotion-ffsv4-388318034   2s
-   └─ǁ wait-approval       sample-template/waiting-for-approval
-
-

Here we have two steps with the same displayName: wait-approval. To select one to suspend, we need to use their -name, either appr-promotion-ffsv4.app1.wait-approval or appr-promotion-ffsv4.app3.wait-approval. If it is not clear -what the full name of a node is, it can be found using kubectl:

-
$ kubectl get wf appr-promotion-ffsv4 -o yaml
-
-...
-    appr-promotion-ffsv4-3235686597:
-      boundaryID: appr-promotion-ffsv4-3079407832
-      displayName: wait-approval                        # <- Display Name
-      finishedAt: null
-      id: appr-promotion-ffsv4-3235686597
-      name: appr-promotion-ffsv4.app1.wait-approval     # <- Full Name
-      phase: Running
-      startedAt: "2021-01-20T17:00:25Z"
-      templateRef:
-        name: sample-template
-        template: waiting-for-approval
-      templateScope: namespaced/wftempl1
-      type: Suspend
-...
-
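With the full name in hand, a hedged example of resuming just one of those suspended nodes (the workflow and node names are taken from the example above and are illustrative):

argo resume appr-promotion-ffsv4 --node-field-selector=name=appr-promotion-ffsv4.app1.wait-approval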
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/node-field-selector/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/offloading-large-workflows/index.html b/offloading-large-workflows/index.html index 8888b934a42c..16657643caaf 100644 --- a/offloading-large-workflows/index.html +++ b/offloading-large-workflows/index.html @@ -1,4032 +1,11 @@ - - - + - - - - - - - - - - - - Offloading Large Workflows - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Offloading Large Workflows - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Offloading Large Workflows

-
-

v2.4 and after

-
-

Argo stores workflows as Kubernetes resources (i.e. within EtcD). This creates a limit to their size as resources must be under 1MB. Each resource includes the status of each node, which is stored in the /status/nodes field for the resource. This can be over 1MB. If this happens, we try and compress the node status and store it in /status/compressedNodes. If the status is still too large, we then try and store it in an SQL database.

-

To enable this feature, configure a Postgres or MySQL database under persistence in your configuration and set nodeStatusOffLoad: true.
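A hedged sketch of what that persistence section could look like in the workflow-controller-configmap, using PostgreSQL; the host, database, and secret names below are placeholders rather than defaults:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  persistence: |
    nodeStatusOffLoad: true
    postgresql:
      host: postgres               # placeholder host
      port: 5432
      database: argo               # placeholder database
      tableName: argo_workflows
      userNameSecret:
        name: argo-postgres-config # placeholder secret
        key: username
      passwordSecret:
        name: argo-postgres-config # placeholder secret
        key: password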

-

FAQ

-

Why aren't my workflows appearing in the database?

-

Offloading is expensive and often unnecessary, so we only offload when we need to. Your workflows probably aren't large enough.

-

Error Failed to submit workflow: etcdserver: request is too large.

-

You must use the Argo CLI, having first run export ARGO_SERVER=....
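For example, a sketch where localhost:2746 and my-workflow.yaml are placeholders for your Argo Server address and your workflow manifest:

export ARGO_SERVER=localhost:2746
argo submit my-workflow.yaml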

-

Error offload node status is not supported

-

Even after compressing node statuses, the workflow exceeded the EtcD -size limit. To resolve, either enable node status offload as described -above or look for ways to reduce the size of your workflow manifest:

-
    -
  • Use withItems or withParams to consolidate similar templates into a single parametrized template
  • -
  • Use template defaults to factor shared template options to the workflow level
  • -
  • Use workflow templates to factor frequently-used templates into separate resources
  • -
  • Use workflows of workflows to factor a large workflow into a workflow of smaller workflows
  • -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/offloading-large-workflows/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/plugin-directory/index.html b/plugin-directory/index.html index 76476742f1aa..6eb43f4ff184 100644 --- a/plugin-directory/index.html +++ b/plugin-directory/index.html @@ -1,3967 +1,11 @@ - - - + - - - - - - - - - - - - Plugin Directory - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Plugin Directory - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Plugin Directory

-

⚠️ Disclaimer: We take only minimal action to verify the authenticity of plugins. Install at your own risk.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name | Description
Hello | Hello world plugin you can use as a template
Slack | Example Slack plugin
Argo CD | Sync Argo CD apps, e.g. to use Argo as CI
Volcano Job Plugin | Execute Volcano Job
Python | Plugin for executing Python
Hermes | Send notifications, e.g. Slack
WASM | Run Web Assembly (WASM) tasks
Chaos Mesh Plugin | Run Chaos Mesh experiment
Pull Request Build Status | Send build status of pull request to Git provider
Atomic Workflow Plugin | Stop the workflows which come from the same WorkflowTemplate and have the same parameters
AWS Plugin | Argo Workflows Executor Plugin for AWS Services, e.g. SageMaker Pipelines, Glue, etc.
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/plugin-directory/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/plugins/index.html b/plugins/index.html index b07893dc0d27..c7ce7ab52236 100644 --- a/plugins/index.html +++ b/plugins/index.html @@ -1,3922 +1,11 @@ - - - + - - - - - - - - - - - - Plugins - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Plugins - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Plugins

-

Plugins

-

Plugins allow you to extend Argo Workflows to add new capabilities.

-
    -
  • You don't need to learn Golang, you can write in any language, including Python.
  • -
  • Simple: a plugin just responds to RPC HTTP requests.
  • -
  • You can iterate quickly by changing the plugin at runtime.
  • -
  • You can get your plugin running today, no need to wait 3-5 months for review, approval, merge and an Argo software - release.
  • -
-

Executor plugins can be written and installed by both users and admins.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/plugins/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/progress/index.html b/progress/index.html index 52487711ccb9..58edee79e5f5 100644 --- a/progress/index.html +++ b/progress/index.html @@ -1,4018 +1,11 @@ - - - + - - - - - - - - - - - - Workflow Progress - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Workflow Progress - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Workflow Progress

-
-

v2.12 and after

-
-

When you run a workflow, the controller will report on its progress.

-

We define progress as two numbers, N/M such that 0 <= N <= M and 0 <= M.

-
    -
  • N is the number of completed tasks.
  • -
  • M is the total number of tasks.
  • -
-

E.g. 0/0, 0/1 or 50/100.

-

Unlike estimated duration, progress is deterministic. I.e. it will be the same for each workflow, regardless of any problems.

-

Progress for each node is calculated as follows:

-
    -
  1. For a pod node either 1/1 if completed or 0/1 otherwise.
  2. -
  3. For non-leaf nodes, the sum of its children.
  4. -
-

For a whole workflow, progress is the sum of the progress of all its leaf nodes. For example, a workflow with four pod nodes of which three have completed reports 3/4.

-
-

Warning

-

M will increase during workflow run each time a node is added to the graph.

-
-

Self reporting progress

-
-

v3.3 and after

-
-

Pods in a workflow can report their own progress during their runtime. This self reported progress overrides the -auto-generated progress.

-

Reporting progress works as follows:

-
    -
  • create and write the progress to a file indicated by the env variable ARGO_PROGRESS_FILE
  • -
  • format of the progress must be N/M
  • -
-

The executor will read this file every 3s and if there was an update, -patch the pod annotations with workflows.argoproj.io/progress: N/M. -The controller picks this up and writes the progress to the appropriate Status properties.

-

Initially the progress of a workflow's pod is always 0/1. If you want to influence this, make sure to set an initial progress annotation on the pod:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: progress-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      dag:
-        tasks:
-          - name: progress
-            template: progress
-    - name: progress
-      metadata:
-        annotations:
-          workflows.argoproj.io/progress: 0/100
-      container:
-        image: alpine:3.14
-        command: [ "/bin/sh", "-c" ]
-        args:
-          - |
-            for i in `seq 1 10`; do sleep 10; echo "$(($i*10))"'/100' > $ARGO_PROGRESS_FILE; done
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/progress/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/proposals/artifact-gc-proposal/index.html b/proposals/artifact-gc-proposal/index.html index d231506fb8d7..473224b3a932 100644 --- a/proposals/artifact-gc-proposal/index.html +++ b/proposals/artifact-gc-proposal/index.html @@ -1,4023 +1,11 @@ - - - + - - - - - - - - - - - - Proposal for Artifact Garbage Collection - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Proposal for Artifact Garbage Collection - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Proposal for Artifact Garbage Collection

-

Introduction

-

The motivation for this is to enable users to automatically have certain Artifacts specified to be automatically garbage collected.

-

Artifacts can be specified for Garbage Collection at different stages: OnWorkflowCompletion, OnWorkflowDeletion, OnWorkflowSuccess, OnWorkflowFailure, or Never

-

Proposal Specifics

-

Workflow Spec changes

-
    -
  1. WorkflowSpec has an ArtifactGC structure, which consists of an ArtifactGCStrategy, as well as the optional designation of a ServiceAccount and Pod metadata (labels and annotations) to be used by the Pod doing the deletion. The ArtifactGCStrategy can be set to OnWorkflowCompletion, OnWorkflowDeletion, OnWorkflowSuccess, OnWorkflowFailure, or Never
  2. -
  3. Artifact has an ArtifactGC section which can be used to override the Workflow level.
  4. -
-

Workflow Status changes

-
    -
  1. Artifact has a boolean Deleted flag
  2. -
  3. WorkflowStatus.Conditions can be set to ArtifactGCError
  4. -
  5. WorkflowStatus can include a new field ArtGCStatus which holds additional information to keep track of the state of Artifact Garbage Collection.
  6. -
-

How it will work

-

For each ArtifactGCStrategy the Controller will execute one Pod that runs in the user's namespace and deletes all artifacts pertaining to that strategy.

-

Option 2 Flow

-

Since OnWorkflowSuccess happens at the same time as OnWorkflowCompletion and OnWorkflowFailure also happens at the same time as OnWorkflowCompletion, we can consider consolidating these GC Strategies together.

-

We will have a new CRD type called ArtifactGCTask and use one or more of them to specify the Artifacts which the GC Pod will read and then write Status to (note individual artifacts have individual statuses). The Controller will read the Status and reflect that in the Workflow Status. The Controller will deem the ArtifactGCTasks ready to read once the Pod has completed (in success or failure).

-

Once the GC Pod has completed and the Workflow status has been persisted, assuming the Pod completed with Success, the Controller can delete the ArtifactGCTasks, which will cause the GC Pod to also get deleted as it will be "owned" by the ArtifactGCTasks.

-

The Workflow will have a Finalizer on it to prevent it from being deleted until Artifact GC has occurred. Once all deletions for all GC Strategies have occurred, the Controller will remove the Finalizer.

-

Failures

-

If a deletion fails, the Pod will retry a few times through exponential back off. Note: it will not be considered a failure if the key does not exist - the principle of idempotence will allow this (i.e. if a Pod were to get evicted and then re-run, it should be okay if some artifacts were previously deleted).

-

Once it retries a few times, if it didn't succeed, it will end in a "Failed" state. The user will manually need to delete the ArtifactGCTasks (which will delete the GC Pod), and remove the Finalizer on the Workflow.

-

The Failure will be reflected in both the Workflow Conditions as well as a Kubernetes Event (and the Artifacts that failed will have "Deleted"=false).

-

Alternatives Considered

-

For reference, these slides were presented to the Argo Contributor meeting on 7/12/22 which go through some of the alternative options that were weighed. These alternatives are explained below:

-

One Pod Per Artifact

-

The POC that was done, which uses just one Pod to delete each Artifact, was considered as an alternative for MVP (Option 1 from the slides).

-

This option has these benefits:

-
    -
  • simpler in that the Pod doesn't require any additional Object to report status (e.g. ArtifactGCTask) because it simply succeeds or fails based on its exit code (whereas in Option 2 the Pod needs to report individual failure statuses for each artifact)
  • -
  • could have a very minimal Service Account which provides access to just that one artifact's location
  • -
-

and these drawbacks:

-
    -
  • deletion is slower when performed by multiple Pods
  • -
  • a Workflow with thousands of artifacts causes thousands of Pods to get executed, which could overwhelm kube-scheduler and kube-apiserver.
  • -
  • if we delay the Artifact GC Pods by giving them a lower priority than the Workflow Pods, users will not get their artifacts deleted when they expect and may log bugs
  • -
-

Summarizing ADR statement: -"In the context of Artifact Garbage Collection, facing whether to use a separate Pod for every artifact or not, we decided not to, to achieve faster garbage collection and reduced load on K8S, accepting that we will require a new CRD type."

-

Service Account/IAM roles

-

We considered some alternatives for how to specify Service Account and/or Annotations, which are applied to give the GC Pod access (slide 12). We will have them specify this information in a new ArtifactGC section of the spec that lives on the Workflow level but can be overridden on the Artifact level (option 3 from slide). Another option considered was to just allow specification on the Workflow level (option 2 from slide) so as to reduce the complexity of the code and reduce the potential number of Pods running, but Option 3 was selected in the end to maximize flexibility.

-

Summarizing ADR statement: -"In the context of Artifact Garbage Collection, facing the question of how users should specify Service Account and annotations, we decided to give them the option to specify them on the Workflow level and/or override them on the Artifact level, to maximize flexibility for user needs, accepting that the code will be more complicated, and sometimes there will be many Pods running."

-

MVP vs post-MVP

-

We will start with just S3.

-

We can also make other determinations if it makes sense to postpone some parts for after MVP.

-

Workflow Spec Validation

-

We can reject the Workflow during validation if ArtifactGC is configured along with a non-supported storage engine (for now probably anything besides S3).

-

Documentation

-

Need to clarify certain things in our documentation:

-
    -
  1. Users need to know that if they don't name their artifacts with unique keys, they risk the same key being deleted by one Workflow and created by another at the same time. One recommendation is to parametrize the key, e.g. {{workflow.uid}}/hello.txt.
  2. -
  3. Requirement to specify Service Account or Annotation for ArtifactGC specifically if they are needed (we won't fall back to default Workflow SA/annotations). Also, the Service Account needs to either be bound to the "agent" role or otherwise allow the same access to ArtifactGCTasks.
  4. -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/proposals/artifact-gc-proposal/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/proposals/cron-wf-improvement-proposal/index.html b/proposals/cron-wf-improvement-proposal/index.html index 82842dddf540..c67f018c5562 100644 --- a/proposals/cron-wf-improvement-proposal/index.html +++ b/proposals/cron-wf-improvement-proposal/index.html @@ -1,4009 +1,11 @@ - - - + - - - - - - - - - - - - Proposal for Cron Workflows improvements - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Proposal for Cron Workflows improvements - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Proposal for Cron Workflows improvements

-

Introduction

-

Currently, CronWorkflows are a great resource if we want to run recurring tasks to infinity. However, they are missing the ability to customize them, for example to define how many times a workflow should run or how to handle multiple failures. I believe Argo Workflows would benefit from having more configuration options for cron workflows, to allow changing their behavior based on the result of their children's successes or failures. Below I present my thoughts on how we could improve them, but also some questions and concerns on how to properly do it.

-

Proposal

-

This proposal discusses the viability of adding 2 more fields into the cron workflow configuration:

-
RunStrategy:
- maxSuccess:
- maxFailures:
-
-

maxSuccess - defines how many child workflows must have success before suspending the workflow schedule

-

maxFailures - defines how many child workflows must fail before suspending the workflow scheduling. This may contain Failed workflows, Errored workflows or spec errors.

-

For example, if we want to run a workflow just once, we could just set:

-
RunStrategy:
- maxSuccess: 1
-
-

This configuration will make sure the controller will keep scheduling workflows until one of them finishes with success.

-

As another example, if we want to stop scheduling workflows when they keep failing, we could configure the CronWorkflow with:

-
RunStrategy:
- maxFailures: 2
-
-

This config will stop scheduling workflows if fails twice.

-

Total vs consecutive

-

One aspect that needs to be discussed is whether these configurations apply to the entire life of a cron Workflow or just in consecutive schedules. For example, if we configure a workflow to stop scheduling after 2 failures, I think it makes sense to have this applied when it fails twice consecutively. Otherwise, we can have 2 outages in different periods which will suspend the workflow. On the other hand, when configuring a workflow to run twice with success, it would make more sense to have it execute with success regardless of whether it is a consecutive success or not. If we have an outage after the first workflow succeeds, which translates into failed workflows, it should need to execute with success only once. So I think it would make sense to have:

-
  • maxFailures - maximum number of consecutive failures before stopping the scheduling of a workflow
  • maxSuccess - maximum number of workflows with success.

How to store state

-

Since we need to control how many child workflows had success/failure we must store state. With this some questions arise:

-
  • Should we just store it through the lifetime of the controller or should we store it to a database?
    • Probably only makes sense if we can back up the state somewhere (like a DB). However, I don't have enough knowledge about workflow's architecture to tell how good of an idea this is.
  • If a CronWorkflow gets re-applied, does it maintain or reset the number of success/failures?
    • I guess it should reset since a configuration change should be seen as a new start.

How to stop the workflow

-

Once the configured number of failures or successes is reached, it is necessary to stop the workflow scheduling. -I believe we have 3 options:

-
    -
  • Delete the workflow: In my opinion, this is the worst option and goes against gitops principles.
  • -
  • Suspend it (set suspend=true): the workflow spec is changed to have the workflow suspended. I may be wrong but this conflicts with gitops as well.
  • -
  • Stop scheduling it: The workflow spec is the same. The controller needs to check if the max number of runs was already attained and skip scheduling if it did.
  • -
-

Option 3 seems to be the only possibility. After reaching the max configured executions, the cron workflow would exist forever but never be scheduled. Maybe we could add a new status field, like Inactive, and have something in the UI to show it?

-

How to handle suspended workflows

-

One possible case that comes to mind is a long outage where all workflows are failing. For example, imagine a workflow that needs to download a file from some storage and for some reason that storage is down. Workflows will keep getting scheduled but they are going to fail. If they fail the number of configured maxFailures, the workflows gets stopped forever. Once the storage is back up, how can the user enable the workflow again?

-
    -
  • Manually re-create the workflow: could be an issue if the user has multiple cron workflows
  • -
  • Instead of stopping the workflow scheduling, introduce a back-off period as suggested by #7291. Or maybe allow both configurations.
  • -
-

I believe option 2 would allow the user to select if they want to stop scheduling or not. If they do, when cron workflows are wrongfully halted, they will need to manually start them again. If they don't, Argo will only introduce a back-off period between schedules to avoid rescheduling workflows that are just going to fail. Spec could look something like:

-
RunStrategy:
- maxSuccess:
- maxFailures:
-  value: # this would be optional
-  back-off:
-   enabled: true
-   factor: 2
-
-

With this configuration the user could configure 3 behaviors:

-
    -
  1. set value if they wanted to stop scheduling a workflow after a maximum number of consecutive failures.
  2. -
  3. set value and back-off if they wanted to stop scheduling a workflow after a maximum number of consecutive failures but with a back-off period between each failure
  4. -
  5. set back-off if they want a back-off period between each failure but they never want to stop the workflow scheduling.
  6. -
-

Wrap up

-

I believe this feature would enhance the cron workflows to allow more specific use cases that are commonly requested by the community, such as running a workflow only once. This proposal raises some concerns on how to properly implement it and I would like to know the maintainers/contributors opinion on these 4 topics, but also some other issues that I couldn't think of.

-

Resources

-
    -
  • This discussion was prompted by #10620
  • -
  • A first approach to this problem was discussed in 5659
  • -
  • A draft PR to implement the first approach #5662
  • -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/proposals/cron-wf-improvement-proposal/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/proposals/makefile-improvement-proposal/index.html b/proposals/makefile-improvement-proposal/index.html index cd51762483bf..75c2a8a19d8d 100644 --- a/proposals/makefile-improvement-proposal/index.html +++ b/proposals/makefile-improvement-proposal/index.html @@ -1,4035 +1,11 @@ - - - + - - - - - - - - - - - - Proposal for Makefile improvements - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Proposal for Makefile improvements - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Proposal for Makefile improvements

-

Introduction

-

The motivation for this proposal is to enable developers working on Argo Workflows to use build tools in a more reproducible way. -Currently the Makefile is unfortunately too opinionated and as a result is often a blocker when first setting up Argo Workflows locally. -I believe we should shrink the responsibilities of the Makefile and where possible outsource areas of responsibility to more specialized technology, such -as Devenv/Nix in the case of dependency management.

-

Proposal Specifics

-

In order to better address reproducibility, it is better to split up the duties the Makefile currently performs into various sub components, that can be assembled in more appropriate technology. One important aspect here is to completely shift the responsibility of dependency management away from the Makefile and into technology such as Nix or Devenv. This proposal will also enable quicker access to a development build of Argo Workflows to developers, reducing the costs of on-boarding and barrier to entry.

-

Devenv

-

Benefits of Devenv

-
    -
  • Reproducible build environment
  • -
  • Ability to run processes
  • -
-

Disadvantages of Devenv

-
    -
  • Huge learning curve to tap into Nix functionality
  • -
  • Less documentation
  • -
-

Nix

-

Benefits of Nix

-
    -
  • Reproducible build environment
  • -
  • Direct raw control of various Nix related functionality instead of using Devenv
  • -
  • More documentation
  • -
-

Disadvantages of Nix

-
    -
  • Huge learning curve
  • -
-

Recommendation

-

I suggest that we use Nix over Devenv. I believe that our build environment is unique enough that we will be tapping into Nix anyway, it probably makes sense to directly use Nix in that case.

-

Proposal

-

In order to maximize the benefit we receive from using something like Nix, I suggest that we initially start off with a modest change to the Makefile. The first proposal would be to remove all dependency management code and replace this functionality with Nix, where it is trivially possible. This may not be possible for some Go-related binaries we use; we will retain the Makefile functionality in those cases, at least for a while. Eventually we will migrate more and more of this responsibility away from the Makefile. Once Nix is responsible for all dependency management, we could start to consider moving more of our build system itself into Nix; it is perhaps easiest to start off with the UI build as it is relatively painless. However, do note that this is not a requirement: I do not see a problem with the Makefile and the Nix file co-existing, it is more about finding a good balance between the reproducibility we desire and the effort we put into obtaining it. An example of a replacement could be this dependency; note that we do not state any version here. Replacing such installations with Nix-based installations ensures that if a build works on a certain developer's machine, it will also work on every other machine.

-

What will Nix get us?

-

As mentioned previously, Nix gets us closer to reproducible build environments. It should significantly ease the on-boarding process of developers onto the project. There have been several developers who wanted to work on Argo Workflows but found the Makefile to be a barrier, and it is likely that there are more developers in this boat. With a reproducible build environment, we hope that everyone who would like to contribute to the project is able to do so easily. It should also save time for engineers on-boarding onto the project, especially if they are using a system that is not Ubuntu or OSX.

-

What will Nix cost us?

-

If we proceed further with Nix, it will require some amount of people working on Argo Workflows to learn it, this is not a trivial task by any means. -It will increase the barrier when it comes to changes that are build related, however, this isn't necessarily bad as build related changes should be far less frequent, the friction we will endure here is likely manageable.

-

How will developers use nix?

-

In the case that both Nix and the Makefile co-exist, we could use nix inside the Makefile itself. The Makefile calls into Nix to setup a developer environment with all dependencies, it will then continue the rest of the Makefile execution as normal. -Following a complete or near complete migration to Nix, we can use nix-build for more of our tasks. An example of a C++ project environment is provided here

-

Resources

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/proposals/makefile-improvement-proposal/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/public-api/index.html b/public-api/index.html index f5f3e21803af..ecd56822daca 100644 --- a/public-api/index.html +++ b/public-api/index.html @@ -1,3916 +1,11 @@ - - - + - - - - - - - - - - - - Public API - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Public API - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Public API

-

Argo Workflows public API is defined by the following:

-
    -
  • The file api/openapi-spec/swagger.json
  • -
  • The schema of the table argo_archived_workflows.
  • -
  • The installation options.
  • -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/public-api/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/quick-start/index.html b/quick-start/index.html index d631fed2e4d3..8e0f88a9bcab 100644 --- a/quick-start/index.html +++ b/quick-start/index.html @@ -1,4142 +1,11 @@ - - - + - - - - - - - - - - - - Quick Start - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Quick Start - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Quick Start

-

To see how Argo Workflows work, you can install it and run examples of simple workflows.

-

Before you start you need a Kubernetes cluster and kubectl set up to be able to access that cluster. For the purposes of getting up and running, a local cluster is fine. You could consider the following local Kubernetes cluster options:

- -

Alternatively, if you want to try out Argo Workflows and don't want to set up a Kubernetes cluster, try the Killercoda course.

-
-

Development vs. Production

-

These instructions are intended to help you get started quickly. They are not suitable for production. For production installs, please refer to the installation documentation.

-
-

Install Argo Workflows

-

To install Argo Workflows, navigate to the releases page and find the release you wish to use (the latest full release is preferred).

-

Scroll down to the Controller and Server section and execute the kubectl commands.

-

Below is an example of the install commands, ensure that you update the command to install the correct version number:

-
kubectl create namespace argo
-kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v<<ARGO_WORKFLOWS_VERSION>>/install.yaml
-
-

Patch argo-server authentication

-

The argo-server (and thus the UI) defaults to client authentication, which requires clients to provide their Kubernetes bearer token in order to authenticate. For more information, refer to the Argo Server Auth Mode documentation. We will switch the authentication mode to server so that we can bypass the UI login for now:

-
kubectl patch deployment \
-  argo-server \
-  --namespace argo \
-  --type='json' \
-  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": [
-  "server",
-  "--auth-mode=server"
-]}]'
-
-

Port-forward the UI

-

Open a port-forward so you can access the UI:

-
kubectl -n argo port-forward deployment/argo-server 2746:2746
-
-

This will serve the UI on https://localhost:2746. Due to the self-signed certificate, you will receive a TLS error which you will need to manually approve.

-
-
Pay close attention to the URI: it uses https, not http. Navigating to http://localhost:2746 results in a server-side error that breaks the port-forwarding.

-
-

Install the Argo Workflows CLI

-

You can more easily interact with Argo Workflows with the Argo CLI.

-
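A minimal sketch of a typical CLI install on Linux amd64, assuming you substitute the same version placeholder used in the install command above (other platforms have equivalent archives on the releases page):

# Download the CLI binary for your chosen release (placeholder version shown)
curl -sLO "https://github.com/argoproj/argo-workflows/releases/download/v<<ARGO_WORKFLOWS_VERSION>>/argo-linux-amd64.gz"

# Unpack, make executable, and put it on your PATH
gunzip argo-linux-amd64.gz
chmod +x argo-linux-amd64
sudo mv ./argo-linux-amd64 /usr/local/bin/argo

# Verify the client is installed
argo version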

Submitting an example workflow

-

Submit an example workflow (CLI)

-
argo submit -n argo --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-world.yaml
-
-
The --watch flag used above allows you to observe the workflow as it runs and see whether it succeeds. When the workflow completes, the watch on the workflow stops.

-

You can list all the Workflows you have submitted by running the command below:

-
argo list -n argo
-
-

You will notice the Workflow name has a hello-world- prefix followed by random characters. These characters are used -to give Workflows unique names to help identify specific runs of a Workflow. If you submitted this Workflow again, -the next Workflow run would have a different name.

-
Using the argo get command, you can always review details of a Workflow run. The output of the command below will be the same as the information shown when you submitted the Workflow:

-
argo get -n argo @latest
-
-
The @latest argument to the CLI is a shortcut to view the most recently executed Workflow run.

-

You can also observe the logs of the Workflow run by running the following:

-
argo logs -n argo @latest
-
-

Submit an example workflow (GUI)

-
    -
  • Open a port-forward so you can access the UI:
  • -
-
kubectl -n argo port-forward deployment/argo-server 2746:2746
-
-
    -
  • -

    Navigate your browser to https://localhost:2746.

    -
  • -
  • -

    Click + Submit New Workflow and then Edit using full workflow options

    -
  • -
  • -

    You can find an example workflow already in the text field. Press + Create to start the workflow.

    -
  • -

This page has moved to https://argo-workflows.readthedocs.io/en/latest/quick-start/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/releases/index.html b/releases/index.html index 283254a8f84a..c6ec097e77ee 100644 --- a/releases/index.html +++ b/releases/index.html @@ -1,4160 +1,11 @@ - - - + - - - - - - - - - - - - Releases - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Releases - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Releases

-
You can find the most recent version under GitHub releases.

-

Versioning

-

Versions are expressed as x.y.z, where x is the major version, y is the minor version, and z is the patch version, -following Semantic Versioning terminology.

-

Argo Workflows does not use Semantic Versioning. Minor versions may contain breaking changes. Patch versions only -contain bug fixes and minor features.

-

For stable, use the latest patch version.

-

⚠️ Read the upgrading guide to find out about breaking changes before any upgrade.

-

Supported Versions

-

We maintain release branches for the most recent two minor releases.

-
Fixes may be back-ported to release branches, depending on severity, risk, and feasibility.

-
Breaking changes will be documented in the upgrading guide.

-

Supported Version Skew

-

Both the argo-server and argocli should be the same version as the controller.

-

Release Cycle

-

New minor versions are released roughly every 6 months.

-

Release candidates (RCs) for major and minor releases are typically available for 4-6 weeks before the release becomes generally available (GA). Features may be shipped in subsequent release candidates.

-

When features are shipped in a new release candidate, the most recent release candidate will be available for at least 2 weeks to ensure it is tested sufficiently before it is pushed to GA. If bugs are found with a feature and are not resolved within the 2 week period, the features will be rolled back so as to be saved for the next major/minor release timeline, and a new release candidate will be cut for testing before pushing to GA.

-

Otherwise, we typically release every two weeks:

-
    -
  • Patch fixes for the current stable version.
  • -
  • The next release candidate, if we are currently in a release-cycle.
  • -
-

Kubernetes Compatibility Matrix

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Argo Workflows \ Kubernetes1.171.181.191.201.211.221.231.241.251.261.27
3.5xxx?????
3.4xxx?
3.3????????
3.2????????
3.1????????
-
    -
  • Fully supported versions.
  • -
  • ? Might not work due to breaking changes; we also haven't thoroughly tested against this version.
  • -
  • Unsupported versions.
  • -
-

Notes on Compatibility

-
Argo versions may be compatible with newer and older Kubernetes versions than those listed, but only three minor versions are supported per Argo release unless otherwise noted.

-

The main branch of Argo Workflows is currently tested on Kubernetes 1.27.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/releases/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/releasing/index.html b/releasing/index.html index 499e1f0bfcf1..7e36a9b5035e 100644 --- a/releasing/index.html +++ b/releasing/index.html @@ -1,4026 +1,11 @@ - - - + - - - - - - - - - - - - Release Instructions - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Release Instructions - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Release Instructions

-

Cherry-Picking Fixes

-

✋ Before you start, make sure you have created a release branch (e.g. release-3.3) and it's passing CI.

-

Then get a list of commits you may want to cherry-pick:

-
./hack/cherry-pick.sh release-3.3 "fix" true
-./hack/cherry-pick.sh release-3.3 "chore(deps)" true
-./hack/cherry-pick.sh release-3.3 "build" true
-./hack/cherry-pick.sh release-3.3 "ci" true
-
-

To automatically cherry-pick, run the following:

-
./hack/cherry-pick.sh release-3.3 "fix" false
-
-

Then look for "failed to cherry-pick" in the log to find commits that fail to be cherry-picked and decide if a -manual patch is necessary.

-

Ignore:

-
    -
  • Fixes for features only on main.
  • -
  • Dependency upgrades, unless they fix known security issues.
  • -
  • Build or CI improvements, unless the release pipeline is blocked without them.
  • -
-
Cherry-pick the first commit. Run make test locally before pushing. If the build times out, the build caches may have been lost; try re-running.

-

Don't cherry-pick another commit until the CI passes. It is harder to find the cause of a new failed build if the last -build failed too.

-

Cherry-picking commits one-by-one and then waiting for the CI will take a long time. Instead, cherry-pick each commit then -run make test locally before pushing.

-
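A minimal sketch of that manual loop, assuming release-3.3 is the release branch and <commit-sha> stands in for the commit you are picking:

git checkout release-3.3
git cherry-pick -x <commit-sha>   # -x records the original commit SHA in the message
make test                         # run the tests locally before pushing
git push upstream release-3.3     # or origin if you do not use upstream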

Publish Release

-

✋ Before you start, make sure the branch is passing CI.

-

Push a new tag to the release branch. E.g.:

-
git tag v3.3.4
-git push upstream v3.3.4 # or origin if you do not use upstream
-
-
GitHub Actions will automatically build and publish your release. This takes about 1h. Set yourself a reminder to check that this was successful.

-

Update Changelog

-

Once the tag is published, GitHub Actions will automatically open a PR to update the changelog. Once the PR is ready, -you can approve it, enable auto-merge, and then run the following to force trigger the CI build:

-
git branch -D create-pull-request/changelog
-git fetch upstream
-git checkout --track upstream/create-pull-request/changelog
-git commit -s --allow-empty -m "docs: Force trigger CI"
-git push upstream create-pull-request/changelog
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/releasing/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/resource-duration/index.html b/resource-duration/index.html index 445848bd7e34..eca4a099775d 100644 --- a/resource-duration/index.html +++ b/resource-duration/index.html @@ -1,4082 +1,11 @@ - - - + - - - - - - - - - - - - Resource Duration - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Resource Duration - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Resource Duration

-
-

v2.7 and after

-
-

Argo Workflows provides an indication of how much resource your workflow has used and saves this -information. This is intended to be an indicative but not accurate value.

-

Calculation

-

The calculation is always an estimate, and is calculated by duration.go -based on container duration, specified pod resource requests, limits, or (for memory and CPU) -defaults.

-

Each indicator is divided by a common denominator depending on resource type.

-

Base Amounts

-

Each resource type has a denominator used to make large values smaller.

-
    -
  • CPU: 1
  • -
  • Memory: 100Mi
  • -
  • Storage: 10Gi
  • -
  • Ephemeral Storage: 10Gi
  • -
  • All others: 1
  • -
-

The requested fraction of the base amount will be multiplied by the container's run time to get -the container's Resource Duration.

-

For example, if you've requested 50Mi of memory (half of the base amount), and the container -runs 120sec, then the reported Resource Duration will be 60sec * (100Mi memory).

-

Request Defaults

-

If requests are not set for a container, Kubernetes defaults to limits. If limits are not set, -Argo falls back to 100m for CPU and 100Mi for memory.

-

Note: these are Argo's defaults, not Kubernetes' defaults. For the most meaningful results, -set requests and/or limits for all containers.

-
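A minimal sketch of setting explicit requests on a template's container so the reported Resource Duration is meaningful (the image and values here are illustrative, not recommendations):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: resource-duration-example-
spec:
  entrypoint: main
  templates:
  - name: main
    container:
      image: alpine:3.18
      command: ["sh", "-c", "sleep 120"]
      resources:
        requests:
          cpu: 500m      # half of the 1-CPU base amount
          memory: 50Mi   # half of the 100Mi base amount

With these requests, a 120s run would be reported as roughly 60s * (1 cpu) and 60s * (100Mi memory), per the calculation described above.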

Example

-

A pod that runs for 3min, with a CPU limit of 2000m, a memory limit of 1Gi and an nvidia.com/gpu -resource limit of 1:

-
CPU:    3min * 2000m / 1000m = 6min * (1 cpu)
-Memory: 3min * 1Gi / 100Mi   = 30min * (100Mi memory)
-GPU:    3min * 1     / 1     = 3min * (1 nvidia.com/gpu)
-
-

Web/CLI reporting

-

Both the web and CLI give abbreviated usage, like 9m10s*cpu,6s*memory,2m31s*nvidia.com/gpu. In -this context, resources like memory refer to the "base amounts".

-

For example, memory means "amount of time a resource requested 100Mi of memory." If a container only -uses 10Mi, each second it runs will only count as a tenth-second of memory.

-

Rounding Down

-
For short-running pods (<10s), if the memory request is also small (for example, 10Mi), the memory value may be 0s. This is because the denominator is 100Mi.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/resource-duration/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/resource-template/index.html b/resource-template/index.html index 4ae22ce5c1bb..f9197a681e36 100644 --- a/resource-template/index.html +++ b/resource-template/index.html @@ -1,3916 +1,11 @@ - - - + - - - - - - - - - - - - Resource Template - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Resource Template - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

This page has moved to https://argo-workflows.readthedocs.io/en/latest/resource-template/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/rest-api/index.html b/rest-api/index.html index b44a53e33def..b1572c605cc5 100644 --- a/rest-api/index.html +++ b/rest-api/index.html @@ -1,3975 +1,11 @@ - - - + - - - - - - - - - - - - REST API - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + REST API - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

REST API

-

Argo Server API

-
-

v2.5 and after

-
-

Argo Workflows ships with a server that provides more features and security than before.

-
The server can be configured with or without client auth (server --auth-mode client). When client auth is disabled, clients must pass their KUBECONFIG, base64 encoded, in the HTTP Authorization header:

-
ARGO_TOKEN=$(argo auth token)
-curl -H "Authorization: $ARGO_TOKEN" https://localhost:2746/api/v1/workflows/argo
-
- -

API reference docs :


This page has moved to https://argo-workflows.readthedocs.io/en/latest/rest-api/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/rest-examples/index.html b/rest-examples/index.html index 68ae4d5861ec..e80ac15b2457 100644 --- a/rest-examples/index.html +++ b/rest-examples/index.html @@ -1,4064 +1,11 @@ - - - + - - - - - - - - - - - - API Examples - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + API Examples - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

API Examples

-
This document contains a couple of examples of workflow JSONs to submit via the argo-server REST API.

-
-

v2.5 and after

-
-

Assuming

-
    -
  • the namespace of argo-server is argo
  • -
  • authentication is turned off (otherwise provide Authorization header)
  • -
  • argo-server is available on localhost:2746
  • -
-

Submitting workflow

-
curl --request POST \
-  --url https://localhost:2746/api/v1/workflows/argo \
-  --header 'content-type: application/json' \
-  --data '{
-  "namespace": "argo",
-  "serverDryRun": false,
-  "workflow": {
-      "metadata": {
-        "generateName": "hello-world-",
-        "namespace": "argo",
-        "labels": {
-          "workflows.argoproj.io/completed": "false"
-         }
-      },
-     "spec": {
-       "templates": [
-        {
-         "name": "whalesay",
-         "arguments": {},
-         "inputs": {},
-         "outputs": {},
-         "metadata": {},
-         "container": {
-          "name": "",
-          "image": "docker/whalesay:latest",
-          "command": [
-            "cowsay"
-          ],
-          "args": [
-            "hello world"
-          ],
-          "resources": {}
-        }
-      }
-    ],
-    "entrypoint": "whalesay",
-    "arguments": {}
-  }
-}
-}'
-
-

Getting workflows for namespace argo

-
curl --request GET \
-  --url https://localhost:2746/api/v1/workflows/argo
-
-

Getting single workflow for namespace argo

-
curl --request GET \
-  --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt
-
-

Deleting single workflow for namespace argo

-
curl --request DELETE \
-  --url https://localhost:2746/api/v1/workflows/argo/abc-dthgt
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/rest-examples/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/retries/index.html b/retries/index.html index cdf3346f97f3..4a5e37fea3fc 100644 --- a/retries/index.html +++ b/retries/index.html @@ -1,4086 +1,11 @@ - - - + - - - - - - - - - - - - Retries - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Retries - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Retries

-

Argo Workflows offers a range of options for retrying failed steps.

-

Configuring retryStrategy in WorkflowSpec

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: retry-container-
-spec:
-  entrypoint: retry-container
-  templates:
-  - name: retry-container
-    retryStrategy:
-      limit: "10"
-    container:
-      image: python:alpine3.6
-      command: ["python", -c]
-      # fail with a 66% probability
-      args: ["import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)"]
-
-

The retryPolicy and expression are re-evaluated after each attempt. For example, if you set retryPolicy: OnFailure and your first attempt produces a failure then a retry will be attempted. If the second attempt produces an error, then another attempt will not be made.

-

Retry policies

-

Use retryPolicy to choose which failure types to retry:

-
    -
  • Always: Retry all failed steps
  • -
  • OnFailure: Retry steps whose main container is marked as failed in Kubernetes
  • -
  • OnError: Retry steps that encounter Argo controller errors, or whose init or wait containers fail
  • -
  • OnTransientError: Retry steps that encounter errors defined as transient, or errors matching the TRANSIENT_ERROR_PATTERN environment variable. Available in version 3.0 and later.
  • -
-
The retryPolicy applies even if you also specify an expression. However, in version 3.5 or later, if you do not explicitly specify a policy, the default behaves so that the expression alone makes the retry decision.

-

The default retryPolicy is OnFailure, except in version 3.5 or later when an expression is also supplied, when it is Always. This may be easier to understand in this diagram.

-
flowchart LR
-  start([Will a retry be attempted])
-  start --> policy
-  policy(Policy Specified?)
-  policy-->|No|expressionNoPolicy
-  policy-->|Yes|policyGiven
-  policyGiven(Expression Specified?)
-  policyGiven-->|No|policyGivenApplies
-  policyGiven-->|Yes|policyAndExpression
-  policyGivenApplies(Supplied Policy)
-  policyAndExpression(Supplied Policy AND Expression)
-  expressionNoPolicy(Expression specified?)
-  expressionNoPolicy-->|No|onfailureNoExpr
-  expressionNoPolicy-->|Yes|version
-  onfailureNoExpr[OnFailure]
-  onfailure[OnFailure AND Expression]
-  version(Workflows version)
-  version-->|3.4 or earlier|onfailure
-  always[Only Expression matters]
-  version-->|3.5 or later|always
-
-

An example retry strategy:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: retry-on-error-
-spec:
-  entrypoint: error-container
-  templates:
-  - name: error-container
-    retryStrategy:
-      limit: "2"
-      retryPolicy: "Always"
-    container:
-      image: python
-      command: ["python", "-c"]
-      # fail with a 80% probability
-      args: ["import random; import sys; exit_code = random.choice(range(0, 5)); sys.exit(exit_code)"]
-
-

Conditional retries

-
-

v3.2 and after

-
-

You can also use expression to control retries. The expression field -accepts an expr expression and has -access to the following variables:

-
    -
  • lastRetry.exitCode: The exit code of the last retry, or "-1" if not available
  • -
  • lastRetry.status: The phase of the last retry: Error, Failed
  • -
  • lastRetry.duration: The duration of the last retry, in seconds
  • -
  • lastRetry.message: The message output from the last retry (available from version 3.5)
  • -
-

If expression evaluates to false, the step will not be retried.

-
The expression result is combined with the retryPolicy using a logical AND: both must be true for a retry to occur.

-

See example for usage.

-
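A minimal sketch combining a policy with an expression, assuming you want to retry failed steps only when the previous attempt's exit code was greater than 1 (asInt converts the string-valued lastRetry.exitCode, as in the upstream retry-conditional example):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: retry-conditional-
spec:
  entrypoint: flaky
  templates:
  - name: flaky
    retryStrategy:
      limit: "3"
      retryPolicy: "OnFailure"
      # only retry when the last attempt exited with a code greater than 1
      expression: "asInt(lastRetry.exitCode) > 1"
    container:
      image: python:alpine3.6
      command: ["python", "-c"]
      args: ["import sys; sys.exit(2)"]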

Back-Off

-

You can configure the delay between retries with backoff. See example for usage.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/retries/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/roadmap/index.html b/roadmap/index.html index aaf1b68ea2b3..249e5feef129 100644 --- a/roadmap/index.html +++ b/roadmap/index.html @@ -1,3894 +1,11 @@ - - - + - - - - - - - - - - - - Roadmap - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Roadmap - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Roadmap

-

The roadmap is currently being revamped. If you want to join the discussions, please join our contributors meeting.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/roadmap/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/running-at-massive-scale/index.html b/running-at-massive-scale/index.html index 15d0edd4d892..1e8d62ade998 100644 --- a/running-at-massive-scale/index.html +++ b/running-at-massive-scale/index.html @@ -1,4040 +1,11 @@ - - - + - - - - - - - - - - - - Running At Massive Scale - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Running At Massive Scale - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Running At Massive Scale

-

Argo Workflows is an incredibly scalable tool for orchestrating workflows. It empowers you to process thousands of workflows per day, with each workflow consisting of tens of thousands of nodes. Moreover, it effortlessly handles hundreds of thousands of smaller workflows daily. However, optimizing your setup is crucial to fully leverage this capability.

-

Run The Latest Version

-

You must be running at least v3.1 for several recommendations to work. Upgrade to the very latest patch. Performance -fixes often come in patches.

-

Test Your Cluster Before You Install Argo Workflows

-

You'll need a big cluster, with a big Kubernetes master.

-
Users often encounter problems with Kubernetes needing to be configured for this scale, e.g. the Kubernetes API server being too small. We recommend you test your cluster to make sure it can run the number of pods you need, even before installing Argo. Create pods at the rate you expect them to be created in production, and make sure Kubernetes can keep up with requests to delete pods at the same rate.

-

You'll need to GC data quickly. The less data that Kubernetes and Argo deal with, the less work they need to do. Use -pod GC and workflow GC to achieve this.

-

Overwhelmed Kubernetes API

-

Where Argo has a lot of work to do, the Kubernetes API can be overwhelmed. There are several strategies to reduce this:

-
    -
  • Use the Emissary executor (>= v3.1). This does not make any Kubernetes API requests (except for resources template).
  • -
  • Limit the number of concurrent workflows using parallelism.
  • -
  • Rate-limit pod creation configuration (>= v3.1).
  • -
  • Set DEFAULT_REQUEUE_TIME=1m
  • -
-
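The parallelism and pod-creation rate limits mentioned above are set in the workflow-controller ConfigMap; a minimal sketch, assuming the parallelism, namespaceParallelism and resourceRateLimit keys supported by recent controller versions, with illustrative values you would tune for your own cluster:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  # limit the total number of workflows that may run concurrently
  parallelism: "500"
  # limit how many workflows may run concurrently in any single namespace
  namespaceParallelism: "100"
  # rate-limit pod creation (>= v3.1)
  resourceRateLimit: |
    limit: 10
    burst: 5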

Overwhelmed Database

-

If you're running workflows with many nodes, you'll probably be offloading data to a database. Offloaded data is kept -for 5m. You can reduce the number of records created by setting DEFAULT_REQUEUE_TIME=1m. This will slow reconciliation, -but will suit workflows where nodes run for over 1m.

-

Miscellaneous

-

See also Scaling.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/running-at-massive-scale/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/running-locally/index.html b/running-locally/index.html index 1d1ed0738e09..60d69f3f35ca 100644 --- a/running-locally/index.html +++ b/running-locally/index.html @@ -1,4283 +1,11 @@ - - - + - - - - - - - - - - - - Running Locally - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Running Locally - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Running Locally

-

You have two options:

-
    -
  1. Use the Dev Container. This takes about 7 minutes. This can be used with VSCode, the devcontainer CLI, or GitHub Codespaces.
  2. -
  3. Install the requirements on your computer manually. This takes about 1 hour.
  4. -
-

Development Container

-

The development container should be able to do everything you need to do to develop Argo Workflows without installing tools on your local machine. It takes quite a long time to build the container. It runs k3d inside the container so you have a cluster to test against. To communicate with services running either in other development containers or directly on the local machine (e.g. a database), the following URL can be used in the workflow spec: host.docker.internal:<PORT>. This facilitates the implementation of workflows which need to connect to a database or an API server.

-
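For example, a step that needs to reach a database running on your local machine can point at host.docker.internal; a minimal sketch, assuming a Postgres instance listening on port 5432:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: local-db-check-
spec:
  entrypoint: check-db
  templates:
  - name: check-db
    container:
      image: postgres:15-alpine
      command: ["sh", "-c"]
      # host.docker.internal resolves to the machine hosting the dev container's cluster
      args: ["pg_isready -h host.docker.internal -p 5432"]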

You can use the development container in a few different ways:

-
    -
  1. Visual Studio Code with Dev Containers extension. Open your argo-workflows folder in VSCode and it should offer to use the development container automatically. VSCode will allow you to forward ports to allow your external browser to access the running components.
  2. -
  3. devcontainer CLI. Once installed, go to your argo-workflows folder and run devcontainer up --workspace-folder . followed by devcontainer exec --workspace-folder . /bin/bash to get a shell where you can build the code. You can use any editor outside the container to edit code; any changes will be mirrored inside the container. Due to a limitation of the CLI, only port 8080 (the Web UI) will be exposed for you to access if you run this way. Other services are usable from the shell inside.
  4. -
  5. GitHub Codespaces. You can start editing as soon as VSCode is open, though you may want to wait for pre-build.sh to finish installing dependencies, building binaries, and setting up the cluster before running any commands in the terminal. Once you start running services (see next steps below), you can click on the "PORTS" tab in the VSCode terminal to see all forwarded ports. You can open the Web UI in a new tab from there.
  6. -
-

Once you have entered the container, continue to Developing Locally.

-

Note:

-
    -
  • -

    for Apple Silicon

    -
      -
    • This platform can spend 3 times the indicated time
    • -
    • Configure Docker Desktop to use BuildKit:
    • -
    -
    "features": {
    -  "buildkit": true
    -},
    -
    -
  • -
  • -

    For Windows WSL2

    -
      -
    • Configure .wslconfig to limit memory usage by the WSL2 to prevent VSCode OOM.
    • -
    -
  • -
  • -

    For Linux

    - -
  • -
-

Requirements

-

Clone the Git repo into: $GOPATH/src/github.com/argoproj/argo-workflows. Any other path will break the code generation.

-

Add the following to your /etc/hosts:

-
127.0.0.1 dex
-127.0.0.1 minio
-127.0.0.1 postgres
-127.0.0.1 mysql
-127.0.0.1 azurite
-
-

To build on your own machine without using the Dev Container you will need:

- -

We recommend using K3D to set up the local Kubernetes cluster since this will allow you to test RBAC -set-up and is fast. You can set-up K3D to be part of your default kube config as follows:

-
k3d cluster start --wait
-
-

Alternatively, you can use Minikube to set up the local Kubernetes cluster. -Once a local Kubernetes cluster has started via minikube start, your kube config will use Minikube's context -automatically.

-
-

Warning

-
Do not use Docker Desktop's embedded Kubernetes: it does not support Kubernetes RBAC (i.e. kubectl auth can-i always returns allowed).

-
-
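A quick sanity check that your local cluster actually enforces RBAC (on a cluster without RBAC, these would report "yes"):

# A plain service account should NOT be able to do everything
kubectl auth can-i create pods --as=system:serviceaccount:default:default
kubectl auth can-i '*' '*' --as=system:serviceaccount:default:default
# Expected on an RBAC-enforcing cluster: "no" for both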

Developing locally

-

To start:

-
    -
  • The controller, so you can run workflows.
  • -
  • MinIO (http://localhost:9000, use admin/password) so you can use artifacts.
  • -
-

Run:

-
make start
-
-

Make sure you don't see any errors in your terminal. This runs the Workflow Controller locally on your machine (not in Docker/Kubernetes).

-

You can submit a workflow for testing using kubectl:

-
kubectl create -f examples/hello-world.yaml
-
-

We recommend running make clean before make start to ensure recompilation.

-

If you made changes to the executor, you need to build the image:

-
make argoexec-image
-
-

To also start the API on http://localhost:2746:

-
make start API=true
-
-

This runs the Argo Server (in addition to the Workflow Controller) locally on your machine.

-

To also start the UI on http://localhost:8080 (UI=true implies API=true):

-
make start UI=true
-
-

diagram

-

If you are making change to the CLI (i.e. Argo Server), you can build it separately if you want:

-
make cli
-./dist/argo submit examples/hello-world.yaml ;# new CLI is created as `./dist/argo`
-
-

Although, note that this will be built automatically if you do: make start API=true.

-

To test the workflow archive, use PROFILE=mysql or PROFILE=postgres:

-
make start PROFILE=mysql
-
-

You'll have, either:

- -

To test SSO integration, use PROFILE=sso:

-
make start UI=true PROFILE=sso
-
-

Running E2E tests locally

-

Start up Argo Workflows using the following:

-
make start PROFILE=mysql AUTH_MODE=client STATIC_FILES=false API=true
-
-

If you want to run Azure tests against a local Azurite:

-
kubectl -n $KUBE_NAMESPACE apply -f test/e2e/azure/deploy-azurite.yaml
-make start
-
-

Running One Test

-
In most cases, you want to run the test that relates to your changes locally. You should not run all the test suites. Our CI will run those concurrently when you create a PR, which will give you feedback much faster.

-

Find the test that you want to run in test/e2e

-
make TestArtifactServer
-
-

Running A Set Of Tests

-

You can find the build tag at the top of the test file.

-
//go:build api
-
-

You need to run make test-{buildTag}, so for api that would be:

-
make test-api
-
-

Diagnosing Test Failure

-

Tests often fail: that's good. To diagnose failure:

-
    -
  • Run kubectl get pods, are pods in the state you expect?
  • -
  • Run kubectl get wf, is your workflow in the state you expect?
  • -
  • What do the pod logs say? I.e. kubectl logs.
  • -
  • Check the controller and argo-server logs. These are printed to the console you ran make start in. Is anything - logged at level=error?
  • -
-

If tests run slowly or time out, factory reset your Kubernetes cluster.

-

Committing

-

Before you commit code and raise a PR, always run:

-
make pre-commit -B
-
-

Please do the following when creating your PR:

- -

Examples:

-
git commit --signoff -m 'fix: Fixed broken thing. Fixes #1234'
-
-
git commit --signoff -m 'feat: Added a new feature. Fixes #1234'
-
-

Troubleshooting

-
    -
  • When running make pre-commit -B, if you encounter errors like - make: *** [pkg/apiclient/clusterworkflowtemplate/cluster-workflow-template.swagger.json] Error 1, ensure that you - have checked out your code into $GOPATH/src/github.com/argoproj/argo-workflows.
  • -
  • If you encounter "out of heap" issues when building UI through Docker, please validate resources allocated to Docker. - Compilation may fail if allocated RAM is less than 4Gi.
  • -
  • To start profiling with pprof, pass ARGO_PPROF=true when starting the controller locally. - Then run the following:
  • -
-
go tool pprof http://localhost:6060/debug/pprof/profile   # 30-second CPU profile
-go tool pprof http://localhost:6060/debug/pprof/heap      # heap profile
-go tool pprof http://localhost:6060/debug/pprof/block     # goroutine blocking profile
-
-

Using Multiple Terminals

-

I run the controller in one terminal, and the UI in another. I like the UI: it is much faster to debug workflows than -the terminal. This allows you to make changes to the controller and re-start it, without restarting the UI (which I -think takes too long to start-up).

-

As a convenience, CTRL=false implies UI=true, so just run:

-
make start CTRL=false
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/running-locally/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/running-nix/index.html b/running-nix/index.html index 7f9f8e47821e..4fe7607ddf94 100644 --- a/running-nix/index.html +++ b/running-nix/index.html @@ -1,4051 +1,11 @@ - - - + - - - - - - - - - - - - Try Argo using Nix - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Try Argo using Nix - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Try Argo using Nix

-

Nix is a package manager / build tool which focuses on reproducible build environments. -Argo Workflows has some basic support for Nix which is enough to get Argo Workflows up and running with minimal effort. -Here are the steps to follow:

-
    -
  1. Modify your hosts file and set up a Kubernetes cluster according to Running Locally. Don't worry about the other instructions.
  2. -
  3. Install Nix.
  4. -
  5. Run nix develop --extra-experimental-features nix-command --extra-experimental-features flakes ./dev/nix/ --impure (you can add the extra features as a default in your nix.conf file).
  6. -
  7. Run devenv up.
  8. -
-

Warning

-
This is still bare-bones at the moment; any feature in the Makefile not mentioned here is excluded for now. In practice, this means that only a make start UI=true equivalent is supported. As an additional caveat, there are no LDFlags set in the build; as a result the UI will show 0.0.0-unknown for the version.

-

How do I upgrade a dependency?

-

Most dependencies are in the Nix packages repository but if you want a specific version, you might have to build it yourself. -This is fairly trivial in Nix, the idea is to just change the version string to whatever package you are concerned about.

-

Changing a python dependency version

-

If we look at the mkdocs dependency, we see a call to buildPythonPackage, to change the version we need to just modify the version string. -Doing this will display a failure because the hash from the fetchPypi command will now differ, it will also display the correct hash, copy this hash -and replace the existing hash value.

-

Changing a go dependency version

-

The almost exact same principles apply here, the only difference being you must change the vendorHash and the sha256 fields. -The vendorHash is a hash of the vendored dependencies while the sha256 is for the sources fetched from the fetchFromGithub call.

-

Why am I getting a vendorSha256 mismatch ?

-
Unfortunately, Dependabot is not capable of upgrading flakes automatically. When the Go modules are automatically upgraded, the hash of the vendored dependencies changes, but this change isn't automatically reflected in the Nix file. The vendorSha256 field that needs to be updated can be found by searching for ${package.name} = pkgs.buildGoModule in the Nix file.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/running-nix/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/scaling/index.html b/scaling/index.html index 60522475a813..a41e2737c4aa 100644 --- a/scaling/index.html +++ b/scaling/index.html @@ -1,4170 +1,11 @@ - - - + - - - - - - - - - - - - Scaling - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Scaling - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Scaling

-

For running large workflows, you'll typically need to scale the controller to match.

-

Horizontally Scaling

-

You cannot horizontally scale the controller.

-
-

v3.0

-
-

As of v3.0, the controller supports having a hot-standby for High Availability.

-

Vertically Scaling

-

You can scale the controller vertically in these ways:

-

Container Resource Requests

-

If you observe the Controller using its total CPU or memory requests, you should increase those.

-

Adding Goroutines to Increase Concurrency

-

If you have sufficient CPU cores, you can take advantage of them with more goroutines:

-
    -
  • If you have many Workflows and you notice they're not being reconciled fast enough, increase --workflow-workers.
  • -
  • If you're using TTLStrategy in your Workflows and you notice they're not being deleted fast enough, increase --workflow-ttl-workers.
  • -
  • If you're using PodGC in your Workflows and you notice the Pods aren't being deleted fast enough, increase --pod-cleanup-workers.
  • -
-
-

v3.5 and after

-
-
    -
  • If you're using a lot of CronWorkflows and they don't seem to be firing on time, increase --cron-workflow-workers.
  • -
-
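These worker counts are command-line arguments on the workflow-controller container; a minimal sketch of the relevant Deployment excerpt, with illustrative values:

# workflow-controller Deployment (excerpt)
spec:
  template:
    spec:
      containers:
      - name: workflow-controller
        args:
        - --workflow-workers=64       # reconcile more Workflows concurrently
        - --workflow-ttl-workers=8    # delete expired Workflows faster
        - --pod-cleanup-workers=16    # delete completed Pods faster
        - --cron-workflow-workers=8   # only relevant on v3.5 and after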

K8S API Client Side Rate Limiting

-

The K8S client library rate limits the messages that can go out.

-

If you frequently see messages similar to this in the Controller log (issued by the library):

-
Waited for 7.090296384s due to client-side throttling, not priority and fairness, request: GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t
-
-

Or, in >= v3.5, if you see warnings similar to this (could be any CR, not just WorkflowTemplate):

-
Waited for 7.090296384s, request:GET:https://10.100.0.1:443/apis/argoproj.io/v1alpha1/namespaces/argo/workflowtemplates/s2t
-
-

Then, if your K8S API Server can handle more requests:

-
    -
  • Increase both --qps and --burst arguments for the Controller. The qps value indicates the average number of queries per second allowed by the K8S Client. The burst value is the number of queries/sec the Client receives before it starts enforcing qps, so typically burst > qps. If not set, the default values are qps=20 and burst=30 (as of v3.5 (refer to cmd/workflow-controller/main.go in case the values change)).
  • -
-
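Like the worker counts, these are controller arguments; a minimal sketch assuming your API server has headroom for roughly double the default load:

# workflow-controller Deployment (excerpt)
args:
- --qps=40     # average Kubernetes API queries per second allowed by the client
- --burst=60   # short bursts above qps before client-side throttling kicks in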

Sharding

-

One Install Per Namespace

-

Rather than running a single installation in your cluster, run one per namespace using the --namespaced flag.

-

Instance ID

-
You can use an instance ID to run N Argo instances within a single cluster.

-
Create one namespace for each Argo instance, e.g. argo-i1, argo-i2.

-

Edit workflow-controller-configmap.yaml for each namespace to set an instance ID.

-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-    instanceID: i1
-
-
-

v2.9 and after

-
-

You may need to pass the instance ID to the CLI:

-
argo --instanceid i1 submit my-wf.yaml
-
-
You do not need to have one instance ID per namespace; you could have many or few.

-

Maximum Recursion Depth

-

In order to protect users against infinite recursion, the controller has a default maximum recursion depth of 100 calls to templates.

-

This protection can be disabled with the environment variable DISABLE_MAX_RECURSION=true

-

Miscellaneous

-

See also Running At Massive Scale.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/scaling/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/security/index.html b/security/index.html index 20b32c98da7d..93e259126a44 100644 --- a/security/index.html +++ b/security/index.html @@ -1,4230 +1,11 @@ - - - + - - - - - - - - - - - - Security - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Security - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Security

-

To report security issues.

-

💡 Read Practical Argo Workflows Hardening.

-

Workflow Controller Security

-

This has three parts.

-

Controller Permissions

-

The controller has permission (via Kubernetes RBAC + its config map) with either all namespaces (cluster-scope install) or a single managed namespace (namespace-install), notably:

-
    -
  • List/get/update workflows, and cron-workflows.
  • -
  • Create/get/delete pods, PVCs, and PDBs.
  • -
  • List/get template, config maps, service accounts, and secrets.
  • -
-

See workflow-controller-cluster-role.yaml or workflow-controller-role.yaml

-

User Permissions

-

Users minimally need permission to create/read workflows. The controller will then create workflow pods (config maps etc) on behalf of the users, even if the user does not have permission to do this themselves. The controller will only create workflow pods in the workflow's namespace.

-

A way to think of this is that, if the user has permission to create a workflow in a namespace, then it is OK to create pods or anything else for them in that namespace.

-

If the user only has permission to create workflows, then they will be typically unable to configure other necessary resources such as config maps, or view the outcome of their workflow. This is useful when the user is a service.

-
-

Warning

-

If you allow users to create workflows in the controller's namespace (typically argo), it may be possible for users to modify the controller itself. In a namespace-install the managed namespace should therefore not be the controller's namespace.

-
-

You can typically further restrict what a user can do to just being able to submit workflows from templates using the workflow restrictions feature.

-

UI Access

-

If you want a user to have read-only access to the entirety of the Argo UI for their namespace, a sample role for them may look like:

-
apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: ui-user-read-only
-rules:
-  # k8s standard APIs
-  - apiGroups:
-      - ""
-    resources:
-      - events
-      - pods
-      - pods/log
-    verbs:
-      - get
-      - list
-      - watch
-  # Argo APIs. See also https://github.com/argoproj/argo-workflows/blob/main/manifests/cluster-install/workflow-controller-rbac/workflow-aggregate-roles.yaml#L4
-  - apiGroups:
-      - argoproj.io
-    resources:
-      - eventsources
-      - sensors
-      - workflows
-      - workfloweventbindings
-      - workflowtemplates
-      - clusterworkflowtemplates
-      - cronworkflows
-      - cronworkflows
-      - workflowtaskresults
-    verbs:
-      - get
-      - list
-      - watch
-
-

Workflow Pod Permissions

-

Workflow pods run using either:

-
    -
  • The default service account.
  • -
  • The service account declared in the workflow spec.
  • -
-

There is no restriction on which service account in a namespace may be used.

-

This service account typically needs permissions.

-

Different service accounts should be used if a workflow pod needs to have elevated permissions, e.g. to create other resources.

-

The main container will have the service account token mounted, allowing the main container to patch pods (among other permissions). Set automountServiceAccountToken to false to prevent this. See fields.

-
By default, workflow pods run as root. To further secure workflow pods, set the workflow pod security context.

-
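A minimal sketch of a workflow spec applying both suggestions, assuming your images can run as a non-root user (the user ID here is illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: secured-
spec:
  entrypoint: main
  # do not mount the service account token into the main container
  automountServiceAccountToken: false
  # run the workflow pods as a non-root user
  securityContext:
    runAsNonRoot: true
    runAsUser: 8737
  templates:
  - name: main
    container:
      image: alpine:3.18
      command: ["echo", "hello"]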

You should configure the controller with the correct workflow executor for your trade off between security and scalability.

-

These settings can be set by default using workflow defaults.

-

Argo Server Security

-

Argo Server implements security in three layers.

-

Firstly, you should enable transport layer security to ensure your data cannot be read in transit.

-

Secondly, you should enable an authentication mode to ensure that you do not run workflows from unknown users.

-

Finally, you should configure the argo-server role and role binding with the correct permissions.

-

Read-Only

-

You can achieve this by configuring the argo-server role (example with only read access (i.e. only get/list/watch verbs)).

-

Network Security

-

Argo Workflows requires various levels of network access depending on configuration and the features enabled. The following describes the different workflow components and their network access needs, to help provide guidance on how to configure the argo namespace in a secure manner (e.g. NetworkPolicy).

-

Argo Server

-

The Argo Server is commonly exposed to end-users to provide users with a UI for visualizing and managing their workflows. It must also be exposed if leveraging webhooks to trigger workflows. Both of these use cases require that the argo-server Service to be exposed for ingress traffic (e.g. with an Ingress object or load balancer). Note that the Argo UI is also available to be accessed by running the server locally (i.e. argo server) using local KUBECONFIG credentials, and visiting the UI over https://localhost:2746.

-

The Argo Server additionally has a feature to allow downloading of artifacts through the UI. This feature requires that the argo-server be given egress access to the underlying artifact provider (e.g. S3, GCS, MinIO, Artifactory, Azure Blob Storage) in order to download and stream the artifact.

-

Workflow Controller

-

The workflow-controller Deployment exposes a Prometheus metrics endpoint (workflow-controller-metrics:9090) so that a Prometheus server can periodically scrape for controller level metrics. Since Prometheus is typically running in a separate namespace, the argo namespace should be configured to allow cross-namespace ingress access to the workflow-controller-metrics Service.

-

Database access

-

A persistent store can be configured for either archiving or offloading workflows. If either of these features are enabled, both the workflow-controller and argo-server Deployments will need egress network access to the external database used for archiving/offloading.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/security/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/service-accounts/index.html b/service-accounts/index.html index 7e81230ca546..a1e8f1a2d605 100644 --- a/service-accounts/index.html +++ b/service-accounts/index.html @@ -1,4014 +1,11 @@ - - - + - - - - - - - - - - - - Service Accounts - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Service Accounts - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Service Accounts

-

Configure the service account to run Workflows

-

Roles, Role-Bindings, and Service Accounts

-
In order for Argo to support features such as artifacts, outputs, and access to secrets, it needs to communicate with Kubernetes resources using the Kubernetes API. To do so, Argo uses a ServiceAccount to authenticate itself to the Kubernetes API. You can specify which Role (i.e. which permissions) the ServiceAccount that Argo uses has by binding a Role to that ServiceAccount using a RoleBinding.

-
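A minimal sketch of that wiring for the Emissary executor in recent versions, assuming the workflow pods only need to report their results back to the controller (the names are illustrative, and your workflows may need additional rules, e.g. for artifacts or secrets):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: executor
rules:
# allow the wait container to report step results back to the controller
- apiGroups: ["argoproj.io"]
  resources: ["workflowtaskresults"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: executor-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: executor
subjects:
- kind: ServiceAccount
  name: default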

Then, when submitting Workflows you can specify which ServiceAccount Argo uses using:

-
argo submit --serviceaccount <name>
-
-

When no ServiceAccount is provided, Argo will use the default ServiceAccount from the namespace from which it is run, which will almost always have insufficient privileges by default.

-

For more information about granting Argo the necessary permissions for your use case see Workflow RBAC.

-

Granting admin privileges

-

For the purposes of this demo, we will grant the default ServiceAccount admin privileges (i.e., we will bind the admin Role to the default ServiceAccount of the current namespace):

-
kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=argo:default -n argo
-
-

Note that this will grant admin privileges to the default ServiceAccount in the namespace that the command is run from, so you will only be able to -run Workflows in the namespace where the RoleBinding was made.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/service-accounts/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/sidecar-injection/index.html b/sidecar-injection/index.html index a91bb20a15a7..b440cab5213f 100644 --- a/sidecar-injection/index.html +++ b/sidecar-injection/index.html @@ -1,4053 +1,11 @@ - - - + - - - - - - - - - - - - Sidecar Injection - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Sidecar Injection - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Sidecar Injection

-

Automatic (i.e. mutating webhook based) sidecar injection systems, including service meshes such as Anthos and Istio -Proxy, create a unique problem for Kubernetes workloads that run to completion.

-
Because sidecars are injected outside of the view of the workflow controller, the controller has no awareness of them. It has no opportunity to rewrite the container's command (when using the Emissary Executor), and because the sidecar's process runs as PID 1, which is protected, it can be impossible for the wait container to terminate the sidecar.

-

You will minimize problems by not using Istio with Argo Workflows.

-

See #1282.

-

Support Matrix

-

Key:

-
    -
  • Unsupported - this executor is no longer supported
  • -
  • Any - we can kill any image
  • -
  • KubectlExec - we kill images by running kubectl exec
  • -
Executor | Sidecar | Injected Sidecar
docker | Any | Unsupported
emissary | Any | KubectlExec
k8sapi | Shell | KubectlExec
kubelet | Shell | KubectlExec
pns | Any | Any
-

How We Kill Sidecars Using kubectl exec

-
-

v3.1 and after

-
-

Kubernetes does not provide a way to kill a single container. You can delete a pod, but this kills all containers, and loses all information -and logs of that pod.

-
Instead, Argo tries to mimic the Kubernetes termination behavior, which is:

-
    -
  1. SIGTERM PID 1
  2. -
  3. Wait for the pod's terminateGracePeriodSeconds (30s by default).
  4. -
  5. SIGKILL PID 1
  6. -
-

The following are not supported:

-
    -
  • preStop
  • -
  • STOPSIGNAL
  • -
-

To do this, it must be possible to run a kubectl exec command that kills the injected sidecar. By default it runs /bin/sh -c 'kill 1'. This can fail:

-
    -
  1. No /bin/sh.
  2. -
  3. Process is not running as PID 1 (which is becoming the default these days due to runAsNonRoot).
  4. -
  5. Process does not correctly respond to kill 1 (e.g. some shell script weirdness).
  6. -
-

You can override the kill command by using a pod annotation (where %d is the signal number), for example:

-
spec:
-  podMetadata:
-    annotations:
-      workflows.argoproj.io/kill-cmd-istio-proxy: '["pilot-agent", "request", "POST", "quitquitquit"]'
-      workflows.argoproj.io/kill-cmd-vault-agent: '["sh", "-c", "kill -%d 1"]'
-      workflows.argoproj.io/kill-cmd-sidecar: '["sh", "-c", "kill -%d $(pidof entrypoint.sh)"]'
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/sidecar-injection/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/static-code-analysis/index.html b/static-code-analysis/index.html index cf4be3dfe3f9..1b24e8337e49 100644 --- a/static-code-analysis/index.html +++ b/static-code-analysis/index.html @@ -1,3916 +1,11 @@ - - - + - - - - - - - - - - - - Static Code Analysis - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Static Code Analysis - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Static Code Analysis

-

We use the following static code analysis tools:

-
    -
  • golangci-lint and eslint for compile time linting.
  • -
  • Snyk for dependency and image scanning (SCA).
  • -
-
These are run at least daily or on each pull request.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/static-code-analysis/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/stress-testing/index.html b/stress-testing/index.html index e5e5672f8a78..ad2d9709931d 100644 --- a/stress-testing/index.html +++ b/stress-testing/index.html @@ -1,4003 +1,11 @@ - - - + - - - - - - - - - - - - Stress Testing - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Stress Testing - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Stress Testing

-

Install gcloud binary.

-
# Login to GCP:
-gcloud auth login
-
-# Set-up your config (if needed):
-gcloud config set project alex-sb
-
-# Create a cluster (default region is us-west-2; if you're not in the west of the USA, you might want a different region):
-gcloud container clusters create-auto argo-workflows-stress-1
-
-# Get credentials:
-gcloud container clusters get-credentials argo-workflows-stress-1                             
-
-# Install workflows (If this fails, try running it again):
-make start PROFILE=stress
-
-# Make sure pods are running:
-kubectl get deployments
-
-# Run a test workflow:
-argo submit examples/hello-world.yaml --watch
-
-

Checks

- -

Run go run ./test/stress/tool -n 10000 to run a large number of workflows.

-

Check Prometheus:

-
  1. See how many Kubernetes API requests are being made. You should see about one Update workflows per reconciliation and multiple Create pods. You should expect to see one Get workflowtemplates per workflow (done on the first reconciliation). If you see anything else, that might be a problem.
  2. How many errors were logged? log_messages{level="error"} What was the cause?

Check PProf to see if there are any hot spots:

-
go tool pprof -png http://localhost:6060/debug/pprof/allocs
-go tool pprof -png http://localhost:6060/debug/pprof/heap
-go tool pprof -png http://localhost:6060/debug/pprof/profile
-
-

Clean-up

-
gcloud container clusters delete argo-workflows-stress-1
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/stress-testing/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/survey-data-privacy/index.html b/survey-data-privacy/index.html index 3bf7623ac430..e7defa21b272 100644 --- a/survey-data-privacy/index.html +++ b/survey-data-privacy/index.html @@ -1,3911 +1,11 @@ - - - + - - - - - - - - - - - - Survey Data Privacy - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Survey Data Privacy - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/survey-data-privacy/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/suspend-template/index.html b/suspend-template/index.html index 9372338dba2f..f75dd266e1e9 100644 --- a/suspend-template/index.html +++ b/suspend-template/index.html @@ -1,3916 +1,11 @@ - - - + - - - - - - - - - - - - Suspend Template - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Suspend Template - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Suspend Template

-
-

v2.1

-
-

See Suspending.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/suspend-template/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/swagger/index.html b/swagger/index.html index 4104897ba015..ca50eb4dae0c 100644 --- a/swagger/index.html +++ b/swagger/index.html @@ -1,3937 +1,11 @@ - - - + - - - - - - - - - - - - API Reference - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + SwaggerUI + + - - - - - - - - - - - - - - - - - - -

API Reference

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/swagger/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/synchronization/index.html b/synchronization/index.html index a024e034bcdf..e1cb4d90e029 100644 --- a/synchronization/index.html +++ b/synchronization/index.html @@ -1,4142 +1,11 @@ - - - + - - - - - - - - - - - - Synchronization - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Synchronization - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Synchronization

-
-

v2.10 and after

-
-

Introduction

-

Synchronization enables users to limit the parallel execution of certain workflows or -templates within a workflow without having to restrict others.

-

Users can create multiple synchronization configurations in the ConfigMap that can be referred to -from a workflow or template within a workflow. Alternatively, users can -configure a mutex to prevent concurrent execution of templates or -workflows using the same mutex.

-

For example:

-
apiVersion: v1
-kind: ConfigMap
-metadata:
- name: my-config
-data:
-  workflow: "1"  # Only one workflow can run at a given time in a particular namespace
-  template: "2"  # Two instances of the template can run at a given time in a particular namespace
-
-

Workflow-level Synchronization

-

Workflow-level synchronization limits the parallel execution of workflows that have the same synchronization reference. In this example, the workflow references the workflow synchronization key, which is configured with a limit of 1, so only one workflow instance will execute at a given time even if multiple workflows are created.

-

Using a semaphore configured by a ConfigMap:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: synchronization-wf-level-
-spec:
-  entrypoint: whalesay
-  synchronization:
-    semaphore:
-      configMapKeyRef:
-        name: my-config
-        key: workflow
-  templates:
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [cowsay]
-      args: ["hello world"]
-
-

Using a mutex:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: synchronization-wf-level-
-spec:
-  entrypoint: whalesay
-  synchronization:
-    mutex:
-      name: workflow
-  templates:
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [cowsay]
-      args: ["hello world"]
-
-

Template-level Synchronization

-

Template-level synchronization limits the parallel execution of a template across workflows when templates have the same synchronization reference. In this example, the acquire-lock template references the template synchronization key, which is configured with a limit of 2, so at most two instances of the template will execute at a given time, whether from multiple steps/tasks within one workflow or from different workflows referring to the same template.

-

Using a semaphore configured by a ConfigMap:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: synchronization-tmpl-level-
-spec:
-  entrypoint: synchronization-tmpl-level-example
-  templates:
-  - name: synchronization-tmpl-level-example
-    steps:
-    - - name: synchronization-acquire-lock
-        template: acquire-lock
-        arguments:
-          parameters:
-          - name: seconds
-            value: "{{item}}"
-        withParam: '["1","2","3","4","5"]'
-
-  - name: acquire-lock
-    synchronization:
-      semaphore:
-        configMapKeyRef:
-          name: my-config
-          key: template
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["sleep 10; echo acquired lock"]
-
-

Using a mutex:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: synchronization-tmpl-level-
-spec:
-  entrypoint: synchronization-tmpl-level-example
-  templates:
-  - name: synchronization-tmpl-level-example
-    steps:
-    - - name: synchronization-acquire-lock
-        template: acquire-lock
-        arguments:
-          parameters:
-          - name: seconds
-            value: "{{item}}"
-        withParam: '["1","2","3","4","5"]'
-
-  - name: acquire-lock
-    synchronization:
-      mutex:
-        name: template
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["sleep 10; echo acquired lock"]
-
-

Examples:

-
  1. Workflow level semaphore
  2. Workflow level mutex
  3. Step level semaphore
  4. Step level mutex

Other Parallelism support

-

In addition to this synchronization support, the workflow controller supports a parallelism setting that applies to all workflows in the system (it is not granular to a class of workflows, or tasks within them). Furthermore, there is a parallelism setting at the workflow and template level, but this only restricts total concurrent executions of tasks within the same workflow; a sketch of that setting follows below.
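A minimal sketch of workflow-level parallelism (not taken from this page; the template names and image are only illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: parallelism-limit-
spec:
  entrypoint: main
  parallelism: 2   # at most two tasks of this workflow run concurrently
  templates:
    - name: main
      steps:
        - - name: sleep
            template: sleep
            withItems: ["1", "2", "3", "4", "5"]
    - name: sleep
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["sleep 10"]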

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/synchronization/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/template-defaults/index.html b/template-defaults/index.html index 1abd80dc8b8c..62878f010811 100644 --- a/template-defaults/index.html +++ b/template-defaults/index.html @@ -1,4030 +1,11 @@ - - - + - - - - - - - - - - - - Template Defaults - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Template Defaults - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Template Defaults

-
-

v3.1 and after

-
-

Introduction

-

The templateDefaults feature enables you to configure default template values at the workflow spec level that will apply to all templates in the workflow. If a template sets a value that also has a default in templateDefaults, the template's value takes precedence. These values are applied at runtime. Template values and default values are merged using a Kubernetes strategic merge patch. To check whether and how list values are merged, inspect the patchStrategy and patchMergeKey tags in the workflow definition.

-

Configuring templateDefaults in WorkflowSpec

-

For example:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  name: template-defaults-example
-spec:
-  entrypoint: main
-  templateDefaults:
-    timeout: 30s   # timeout value will be applied to all templates
-    retryStrategy: # retryStrategy value will be applied to all templates
-      limit: 2
-  templates:
-  - name: main
-    container:
-      image: docker/whalesay:latest
-
-

template defaults example
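As a hedged illustration of the precedence rule above (not taken from the linked example), a template that sets its own timeout would override the workflow-level default for that template only:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: template-defaults-override-example
spec:
  entrypoint: main
  templateDefaults:
    timeout: 30s        # default applied to all templates
  templates:
  - name: main
    timeout: 2m         # this template's own value takes precedence over the 30s default
    container:
      image: docker/whalesay:latest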

-

Configuring templateDefaults in Controller Level

-

Operators can configure templateDefaults in the workflow defaults. These template defaults will be applied to all workflows that run on the controller.

-

The following would be specified in the Config Map:

-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  # Default values that will apply to all Workflows from this controller, unless overridden on the Workflow-level
-  workflowDefaults: |
-    metadata:
-      annotations:
-        argo: workflows
-      labels:
-        foo: bar
-    spec:
-      ttlStrategy:
-        secondsAfterSuccess: 5
-      templateDefaults:
-        timeout: 30s
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/template-defaults/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/tls/index.html b/tls/index.html index 42e4b145ae8a..d577286fe68b 100644 --- a/tls/index.html +++ b/tls/index.html @@ -1,4112 +1,11 @@ - - - + - - - - - - - - - - - - Transport Layer Security - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Transport Layer Security - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Transport Layer Security

-
-

v2.8 and after

-
-

If you're running Argo Server you have three options with increasing transport security (note - you should also be -running authentication):

-

Default configuration

-
-

v2.8 - 2.12

-
-

Defaults to Plain Text

-
-

v3.0 and after

-
-

Defaults to Encrypted if cert is available

-

Argo image/deployment defaults to Encrypted with a self-signed certificate which expires after 365 days.

-

Plain Text

-

Recommended for: development.

-

Everything is sent in plain text.

-

Start Argo Server with the --secure=false (or ARGO_SECURE=false) flag, e.g.:

-
export ARGO_SECURE=false
-argo server --secure=false
-
-

To secure the UI you may front it with a HTTPS proxy.

-

Encrypted

-

Recommended for: development and test environments.

-

You can encrypt connections without any real effort.

-

Start Argo Server with the --secure flag, e.g.:

-
argo server --secure
-
-

It will start with a self-signed certificate that expires after 365 days.

-

Run the CLI with --secure (or ARGO_SECURE=true) and --insecure-skip-verify (or ARGO_INSECURE_SKIP_VERIFY=true).

-
argo --secure --insecure-skip-verify list
-
-
export ARGO_SECURE=true
-export ARGO_INSECURE_SKIP_VERIFY=true
-argo --secure --insecure-skip-verify list
-
-

Tip: Don't forget to update your readiness probe to use HTTPS. To do so, edit your argo-server -Deployment's readinessProbe spec:

-
readinessProbe:
-    httpGet: 
-        scheme: HTTPS
-
-

Encrypted and Verified

-

Recommended for: production environments.

-

Run your HTTPS proxy in front of the Argo Server. You'll need to set-up your certificates (this is out of scope of this -documentation).

-

Start Argo Server with the --secure flag, e.g.:

-
argo server --secure
-
-

As before, it will start with a self-signed certificate that expires after 365 days.

-

Run the CLI with --secure (or ARGO_SECURE=true) only.

-
argo --secure list
-
-
export ARGO_SECURE=true
-argo list
-
-

TLS Min Version

-

Set TLS_MIN_VERSION to be the minimum TLS version to use. This is v1.2 by default.

-

This must be one of these int values.

- - - - - - - - - - - - - - - - - - - - - - - - - -
Version | Value
v1.0 | 769
v1.1 | 770
v1.2 | 771
v1.3 | 772
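A minimal sketch of setting the variable as an environment variable on the server container (assuming the standard argo-server Deployment; your manifest layout may differ):

# fragment of the argo-server Deployment spec (names assumed)
      containers:
        - name: argo-server
          env:
            - name: TLS_MIN_VERSION
              value: "772"   # require TLS v1.3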
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/tls/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/tolerating-pod-deletion/index.html b/tolerating-pod-deletion/index.html index 49faea9878ea..4942ab757330 100644 --- a/tolerating-pod-deletion/index.html +++ b/tolerating-pod-deletion/index.html @@ -1,3987 +1,11 @@ - - - + - - - - - - - - - - - - Tolerating Pod Deletion - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Tolerating Pod Deletion - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Tolerating Pod Deletion

-
-

v2.12 and after

-
-

In Kubernetes, pods are cattle and can be deleted at any time. Deletion could happen manually via kubectl delete pod, during a node drain, or for other reasons.

-

This can be very inconvenient: your workflow will error, for reasons outside of your control.

-

A pod disruption budget can reduce the likelihood of this happening. But, it cannot entirely prevent it.

-

To retry pods that were deleted, set retryStrategy.retryPolicy: OnError.

-

This can be set at the workflow level, the template level, or globally (using workflow defaults); a sketch of the global option follows below.
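A hedged sketch of the global option, assuming the standard workflow-controller-configmap and the workflowDefaults mechanism shown elsewhere in these docs:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  workflowDefaults: |
    spec:
      retryStrategy:
        retryPolicy: OnError
        limit: 1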

-

Example

-

Run the following workflow (which will sleep for 30s):

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  name: example
-spec:
-  retryStrategy:
-   retryPolicy: OnError
-   limit: 1
-  entrypoint: main
-  templates:
-    - name: main
-      container:
-        image: docker/whalesay:latest
-        command:
-          - sleep
-          - 30s
-
-

Then execute kubectl delete pod example. You'll see that the errored node is automatically retried.

-

💡 Read more on architecting workflows for reliability.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/tolerating-pod-deletion/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/training/index.html b/training/index.html index 65ec0e55c035..d3118b38b2fb 100644 --- a/training/index.html +++ b/training/index.html @@ -1,3991 +1,11 @@ - - - + - - - - - - - - - - - - Training - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Training - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Training

-

Videos

-

We also have a YouTube playlist of videos that includes workshops you can follow along with:

-

Videos Screenshot Open the playlist

-

Hands-On

-

We've created a Killercoda course featuring beginner and intermediate lessons. These allow you to try out Argo Workflows in your web browser without needing to install anything on your computer. Each lesson starts up a Kubernetes cluster that you can access via a web browser.

-

Additional resources

-

Visit the awesome-argo GitHub repo for more educational resources.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/training/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/upgrading/index.html b/upgrading/index.html index f9e6327c82b5..709c046da1ad 100644 --- a/upgrading/index.html +++ b/upgrading/index.html @@ -1,4677 +1,11 @@ - - - + - - - - - - - - - - - - Upgrading Guide - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Upgrading Guide - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Upgrading Guide

-

Breaking changes typically (sometimes we don't realise they are breaking) have "!" in the commit message, as per -the conventional commits.

-

Upgrading to v3.5

-

There are no known breaking changes in this release. Please file an issue if you encounter any unexpected problems after upgrading.

-

Upgrading to v3.4

-

Non-Emissary executors are removed. (#7829)

-

The Emissary executor is now the only supported executor. If you are using other executors (e.g. docker, k8sapi, pns, or kubelet), you need to remove containerRuntimeExecutors and containerRuntimeExecutor from your controller's ConfigMap. If you have workflows that select a different executor with the label workflows.argoproj.io/container-runtime-executor, this is no longer supported and will have no effect.

-

chore!: Remove dataflow pipelines from codebase. (#9071)

-

You are affected if you are using dataflow pipelines in the UI or via the /pipelines endpoint. -We no longer support dataflow pipelines and all relevant code has been removed.

-

feat!: Add entrypoint lookup. Fixes #8344

-

Affected if:

-
  • Using the Emissary executor.
  • Used the args field for any entry in images.

This PR automatically looks up the command and entrypoint. The implementation for config look-up was incorrect (it -allowed you to specify args but not entrypoint). args has been removed to correct the behaviour.

-

If you are incorrectly configured, the workflow controller will error on start-up.

-

Actions

-

You don't need to configure images that use v2 manifests anymore. You can just remove them (e.g. argoproj/argosay:v2):

-
% docker manifest inspect argoproj/argosay:v2
-...
-"schemaVersion": 2,
-...
-
-

For v1 manifests (e.g. docker/whalesay:latest):

-
% docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' docker/whalesay:latest
-[] [/bin/bash]
-
-
images:
-  docker/whalesay:latest:
-    cmd: [/bin/bash]
-
-

feat: Fail on invalid config. (#8295)

-

The workflow controller will error on start-up if incorrectly configured, rather than silently ignoring -mis-configuration.

-
Failed to register watch for controller config map: error unmarshaling JSON: while decoding JSON: json: unknown field \"args\"
-
-

feat: add indexes for improve archived workflow performance. (#8860)

-

This PR adds indexes to archived workflow tables. This change may make the upgrade take a long time if you have a large table.

-

feat: enhance artifact visualization (#8655)

-

For AWS users using S3: visualizing artifacts in the UI and downloading them now requires an additional "Action" to be configured in your S3 bucket policy: "ListBucket".

-

Upgrading to v3.3

-

662a7295b feat: Replace patch pod with create workflowtaskresult. Fixes #3961 (#8000)

-

The PR changes the permissions that can be used by a workflow to remove the pod patch permission.

-

See workflow RBAC and #8013.

-

06d4bf76f fix: Reduce agent permissions. Fixes #7986 (#7987)

-

The PR changes the permissions used by the agent to report back the outcome of HTTP template requests. The permission patch workflowtasksets/status replaces patch workflowtasksets, for example:

-
apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: agent
-rules:
-  - apiGroups:
-      - argoproj.io
-    resources:
-      - workflowtasksets/status
-    verbs:
-      - patch
-
-

Workflows running during any upgrade should be given both permissions; a sketch of a Role granting both follows below.
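A minimal sketch of such a transitional Role (assuming the same Role name as the example above):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: agent
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - workflowtasksets
      - workflowtasksets/status
    verbs:
      - patch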

-

See #8013.

-

feat!: Remove deprecated config flags

-

This PR removes the following ConfigMap items:

-
  • executorImage (use executor.image in the ConfigMap instead) - e.g. a workflow controller ConfigMap like the following will no longer be valid:
-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  ...
-  executorImage: argoproj/argocli:latest
-  ...
-
-

From now and onwards, only provide the executor image in workflow controller as a command argument as shown below:

-
apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  ...
-  executor: |
-    image: argoproj/argocli:latest
-  ...
-
-
  • executorImagePullPolicy (use executor.imagePullPolicy in the ConfigMap instead) - e.g. a workflow controller ConfigMap like the following will no longer be valid:
-
data:
-  ...
-  executorImagePullPolicy: IfNotPresent
-  ...
-
-

Change it as shown below:

-
data:
-  ...
-  executor: |
-    imagePullPolicy: IfNotPresent
-  ...
-
-
  • executorResources (use executor.resources in the ConfigMap instead) - e.g. a workflow controller ConfigMap like the following will no longer be valid:
-
data:
-  ...
-  executorResources:
-    requests:
-      cpu: 0.1
-      memory: 64Mi
-    limits:
-      cpu: 0.5
-      memory: 512Mi
-  ...
-
-

Change it as shown below:

-
data:
-  ...
-  executor: |
-    resources:
-      requests:
-        cpu: 0.1
-        memory: 64Mi
-      limits:
-        cpu: 0.5
-        memory: 512Mi
-  ...
-
-

fce82d572 feat: Remove pod workers (#7837)

-

This PR removes pod workers from the code; the pod informer now writes directly into the workflow queue. As a result, the --pod-workers flag has been removed.

-

93c11a24ff feat: Add TLS to Metrics and Telemetry servers (#7041)

-

This PR adds the ability to send metrics over TLS with a self-signed certificate. In v3.5 this will be enabled by default, so it is recommended that users enable this functionality now.

-

0758eab11 feat(server)!: Sync dispatch of webhook events by default

-

This is not expected to impact users.

-

Event dispatch in the Argo Server has been changed from async to sync by default. This is so that errors are surfaced to the client, rather than only appearing as logs or Kubernetes events. It is possible that response times under load become too long for your client, in which case you may prefer to revert this behaviour.

-

To revert this behaviour, restart Argo Server with ARGO_EVENT_ASYNC_DISPATCH=true. Make sure that asyncDispatch=true -is logged.

-

bd49c6303 fix(artifact)!: default https to any URL missing a scheme. Fixes #6973

-

An HTTPArtifact without a scheme now defaults to https instead of http.

-

Users need to explicitly include an http:// prefix if they want to retrieve an HTTPArtifact over http, as sketched below.
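A hedged sketch of an artifact that keeps plain http after this change (the URL is only illustrative):

# fragment of a template spec (illustrative URL)
  inputs:
    artifacts:
      - name: my-art
        path: /my-artifact
        http:
          url: "http://example.com/some/artifact.txt"   # explicit http:// avoids the new https default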

-

chore!: Remove the hidden flag --verify from argo submit

-

The hidden flag --verify has been removed from argo submit. This was an internal testing flag that is no longer needed.

-

Upgrading to v3.2

-

e5b131a33 feat: Add template node to pod name. Fixes #1319 (#6712)

-

This adds the template name to the pod name, to make it easier to understand which pod ran which step. This behaviour can be reverted by setting POD_NAMES=v1 on the workflow controller, as sketched below.
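A minimal sketch of reverting the behaviour (assuming the standard workflow-controller Deployment; your manifest layout may differ):

# fragment of the workflow-controller Deployment spec (names assumed)
      containers:
        - name: workflow-controller
          env:
            - name: POD_NAMES
              value: v1   # revert to the pre-v3.2 pod naming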

-

be63efe89 feat(executor)!: Change argoexec base image to alpine. Closes #5720 (#6006)

-

Changing from Debian to Alpine reduces the size of the argoexec image, resulting in faster-starting workflow pods, and it also reduces the risk of security issues. There is no such thing as a free lunch, though: there may be other behaviour changes we don't know of yet.

-

Some users found this change prevented workflow with very large parameters from running. See #7586

-

48d7ad3 chore: Remove onExit naming transition scaffolding code (#6297)

-

When upgrading from <v2.12 to >v3.2 workflows that are running at the time of the upgrade and have onExit steps may experience the onExit step running twice. This is only applicable for workflows that began running before a workflow-controller upgrade and are still running after the upgrade is complete. This is only applicable for upgrading from v2.12 or earlier directly to v3.2 or later. Even under these conditions, duplicate work may not be experienced.

-

Upgrading to v3.1

-

3fff791e4 build!: Automatically add manifests to v* tags (#5880)

-

The manifests in the repository on the tag will no longer contain the image tag, instead they will contain :latest.

-
  • You must not get your manifests from the Git repository, you must get them from the release notes.
  • You must not use the stable tag. This is defunct, and will be removed in v3.1.

ab361667a feat(controller) Emissary executor. (#4925)

-

The Emissary executor is not a breaking change per-se, but it is brand new so we would not recommend you use it by default yet. Instead, we recommend you test it out on some workflows using a workflow-controller-configmap configuration.

-
# Specifies the executor to use.
-#
-# You can use this to:
-# * Tailor your executor based on your preference for security or performance.
-# * Test out an executor without committing yourself to use it for every workflow.
-#
-# To find out which executor was actually use, see the `wait` container logs.
-#
-# The list is in order of precedence; the first matching executor is used.
-# This has precedence over `containerRuntimeExecutor`.
-containerRuntimeExecutors: |
-  - name: emissary
-    selector:
-      matchLabels:
-        workflows.argoproj.io/container-runtime-executor: emissary
-
-

be63efe89 feat(controller): Expression template tags. Resolves #4548 & #1293 (#5115)

-

This PR introduced a new expression syntax known as "expression tag template". A user has reported that this does not always play nicely with the when condition syntax (govaluate).

-

This can be resolved using a single quote in your when expression:

-
when: "'{{inputs.parameters.should-print}}' != '2021-01-01'"
-
-

Learn more

-

Upgrading to v3.0

-

defbd600e fix: Default ARGO_SECURE=true. Fixes #5607 (#5626)

-

The server now starts with TLS enabled by default if a key is available. The original behaviour can be configured with --secure=false.

-

If you have an ingress, you may need to add the appropriate annotations (varies by ingress):

-
alb.ingress.kubernetes.io/backend-protocol: HTTPS
-nginx.ingress.kubernetes.io/backend-protocol: HTTPS
-
-

01d310235 chore(server)!: Required authentication by default. Resolves #5206 (#5211)

-

To login to the user interface, you must provide a login token. The original behaviour can be configured with --auth-mode=server.

-

f31e0c6f9 chore!: Remove deprecated fields (#5035)

-

Some fields that were deprecated in early 2020 have been removed.

- - - - - - - - - - - - - - - - - -
Field | Action
template.template and template.templateRef | The workflow spec must be changed to use steps or DAG, otherwise the workflow will error.
spec.ttlSecondsAfterFinished | Change to spec.ttlStrategy.secondsAfterCompletion, otherwise the workflow will not be garbage collected as expected.
-
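A rough sketch of the second migration (field names as in the table above; 600 is only an illustrative value):

spec:
  # before (no longer valid): ttlSecondsAfterFinished: 600
  ttlStrategy:
    secondsAfterCompletion: 600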

To find impacted workflows:

-
kubectl get wf --all-namespaces -o yaml | grep templateRef
-kubectl get wf --all-namespaces -o yaml | grep ttlSecondsAfterFinished
-
-

c8215f972 feat(controller)!: Key-only artifacts. Fixes #3184 (#4618)

-

This change is not breaking per-se, but many users do not appear to be aware of artifact repository ref, so check your usage of that feature if you have problems.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/upgrading/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/ci-cd/index.html b/use-cases/ci-cd/index.html index 13e537bd155f..71e0e75aa742 100644 --- a/use-cases/ci-cd/index.html +++ b/use-cases/ci-cd/index.html @@ -1,3985 +1,11 @@ - - - + - - - - - - - - - - - - CI/CD - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + CI/CD - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/ci-cd/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/data-processing/index.html b/use-cases/data-processing/index.html index 8f611333df7c..1997bdcc4861 100644 --- a/use-cases/data-processing/index.html +++ b/use-cases/data-processing/index.html @@ -1,3998 +1,11 @@ - - - + - - - - - - - - - - - - Data Processing - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Data Processing - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/data-processing/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/infrastructure-automation/index.html b/use-cases/infrastructure-automation/index.html index 2ec6546f5ff9..1bfe07852913 100644 --- a/use-cases/infrastructure-automation/index.html +++ b/use-cases/infrastructure-automation/index.html @@ -1,3984 +1,11 @@ - - - + - - - - - - - - - - - - Infrastructure Automation - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Infrastructure Automation - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/infrastructure-automation/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/machine-learning/index.html b/use-cases/machine-learning/index.html index e9dd0afef061..5fe8400e54a1 100644 --- a/use-cases/machine-learning/index.html +++ b/use-cases/machine-learning/index.html @@ -1,4010 +1,11 @@ - - - + - - - - - - - - - - - - Machine Learning - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Machine Learning - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/machine-learning/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/other/index.html b/use-cases/other/index.html index 82fed31c5b8f..39e881251940 100644 --- a/use-cases/other/index.html +++ b/use-cases/other/index.html @@ -1,3964 +1,11 @@ - - - + - - - - - - - - - - - - Other - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Other - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/other/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/stream-processing/index.html b/use-cases/stream-processing/index.html index d51a8713887d..e260dbdcfec7 100644 --- a/use-cases/stream-processing/index.html +++ b/use-cases/stream-processing/index.html @@ -1,3913 +1,11 @@ - - - + - - - - - - - - - - - - Stream Processing - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Stream Processing - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/stream-processing/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/use-cases/webhdfs/index.html b/use-cases/webhdfs/index.html index ad3d1cd81865..7b6af46c93b1 100644 --- a/use-cases/webhdfs/index.html +++ b/use-cases/webhdfs/index.html @@ -1,4026 +1,11 @@ - - - + - - - - - - - - - - - - webHDFS via HTTP artifacts - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + webHDFS via HTTP artifacts - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

webHDFS via HTTP artifacts

-

webHDFS is a protocol for accessing Hadoop or similar data storage via a unified REST API.

-

Input Artifacts

-

You can use HTTP artifacts to connect to webHDFS, where the URL will be the webHDFS endpoint including the file path and any query parameters. -Suppose your webHDFS endpoint is available under https://mywebhdfsprovider.com/webhdfs/v1/ and you have a file my-art.txt located in a data folder, which you want to use as an input artifact. To construct the URL, you append the file path to the base webHDFS endpoint and set the OPEN operation via query parameter. The result is: https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN. -See the below Workflow which will download the specified webHDFS artifact into the specified path:

-
spec:
-  # ...
-  inputs:
-    artifacts:
-    - name: my-art
-      path: /my-artifact
-      http:
-        url: "https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN"
-
-

Additional fields can be set for HTTP artifacts (for example, headers). See usage in the full webHDFS example.

-

Output Artifacts

-

To declare a webHDFS output artifact, instead use the CREATE operation and set the file path to your desired location. -In the below example, the artifact will be stored at outputs/newfile.txt. You can overwrite existing files with overwrite=true.

-
spec:
-  # ...
-  outputs:
-    artifacts:
-    - name: my-art
-      path: /my-artifact
-      http:
-        url: "https://mywebhdfsprovider.com/webhdfs/v1/outputs/newfile.txt?op=CREATE&overwrite=true"
-
-

Authentication

-

The above examples show minimal use cases without authentication. However, in a real-world scenario, you may want to use authentication. -The authentication mechanism is limited to those supported by HTTP artifacts:

-
  • HTTP Basic Auth
  • OAuth2
  • Client Certificates

Examples for the latter two mechanisms can be found in the full webHDFS example.

-
-

Provider dependent

-

While your webHDFS provider may support the above mechanisms, Hadoop itself only supports authentication via Kerberos SPNEGO and Hadoop delegation token. HTTP artifacts do not currently support SPNEGO, but delegation tokens can be used via the delegation query parameter.
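A hedged sketch of using a delegation token (the endpoint and token are placeholders; exact parameter handling depends on your provider):

# fragment of a template spec (illustrative URL and token)
  inputs:
    artifacts:
      - name: my-art
        path: /my-artifact
        http:
          url: "https://mywebhdfsprovider.com/webhdfs/v1/data/my-art.txt?op=OPEN&delegation=<token>"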

-
- - - - -

Comments

- - - - -
-
-
- - - - Back to top - - -
- - - -
-
-
-
- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/use-cases/webhdfs/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/variables/index.html b/variables/index.html index 4ab7358ceede..bb0be2be8e9a 100644 --- a/variables/index.html +++ b/variables/index.html @@ -1,4794 +1,11 @@ - - - + - - - - - - - - - - - - Workflow Variables - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Workflow Variables - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Workflow Variables

-

Some fields in a workflow specification allow for variable references which are automatically substituted by Argo.

-

How to use variables

-

Variables are enclosed in curly braces:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-world-parameters-
-spec:
-  entrypoint: whalesay
-  arguments:
-    parameters:
-      - name: message
-        value: hello world
-  templates:
-    - name: whalesay
-      inputs:
-        parameters:
-          - name: message
-      container:
-        image: docker/whalesay
-        command: [ cowsay ]
-        args: [ "{{inputs.parameters.message}}" ]
-
-

The following variables are made available to reference various meta-data of a workflow:

-

Template Tag Kinds

-

There are two kinds of template tag:

-
  • simple - the default, e.g. {{workflow.name}}
  • expression - where {{ is immediately followed by =, e.g. {{=workflow.name}}.

Simple

-

The tag is substituted with the variable that has a name the same as the tag.

-

Simple tags may have white-space between the brackets and variable as seen below. However, there is a known issue where variables may fail to interpolate with white-space, so it is recommended to avoid using white-space until this issue is resolved. Please report unexpected behavior with reproducible examples.

-
args: [ "{{ inputs.parameters.message }}" ]
-
-

Expression

-
-

Since v3.1

-
-

The tag is substituted with the result of evaluating the tag as an expression.

-

Note that any hyphenated parameter names or step names will cause a parsing error. You can reference them by -indexing into the parameter or step map, e.g. inputs.parameters['my-param'] or steps['my-step'].outputs.result.

-

Learn about the expression syntax.

-

Examples

-

Plain list:

-
[1, 2]
-
-

Filter a list:

-
filter([1, 2], { # > 1})
-
-

Map a list:

-
map([1, 2], { # * 2 })
-
-

We provide some core functions:

-

Cast to int:

-
asInt(inputs.parameters['my-int-param'])
-
-

Cast to float:

-
asFloat(inputs.parameters['my-float-param'])
-
-

Cast to string:

-
string(1)
-
-

Convert to a JSON string (needed for withParam):

-
toJson([1, 2])
-
-

Extract data from JSON:

-
jsonpath(inputs.parameters.json, '$.some.path')
-
-

You can also use Sprig functions:

-

Trim a string:

-
sprig.trim(inputs.parameters['my-string-param'])
-
-
-

Sprig error handling

-

Sprig functions often do not raise errors. -For example, if int is used on an invalid value, it returns 0. -Please review the Sprig documentation to understand which functions raise errors and which do not.

-
-

Reference

-

All Templates

- - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
inputs.parameters.<NAME> | Input parameter to a template
inputs.parameters | All input parameters to a template as a JSON string
inputs.artifacts.<NAME> | Input artifact to a template
node.name | Full name of the node
-

Steps Templates

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
steps.name | Name of the step
steps.<STEPNAME>.id | Unique id of container step
steps.<STEPNAME>.ip | IP address of a previous daemon container step
steps.<STEPNAME>.status | Phase status of any previous step
steps.<STEPNAME>.exitCode | Exit code of any previous script or container step
steps.<STEPNAME>.startedAt | Time-stamp when the step started
steps.<STEPNAME>.finishedAt | Time-stamp when the step finished
steps.<STEPNAME>.hostNodeName | Host node where the step ran (available from version 3.5)
steps.<STEPNAME>.outputs.result | Output result of any previous container or script step
steps.<STEPNAME>.outputs.parameters | When the previous step uses withItems or withParams, this contains a JSON array of the output parameter maps of each invocation
steps.<STEPNAME>.outputs.parameters.<NAME> | Output parameter of any previous step. When the previous step uses withItems or withParams, this contains a JSON array of the output parameter values of each invocation
steps.<STEPNAME>.outputs.artifacts.<NAME> | Output artifact of any previous step
-

DAG Templates

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
tasks.name | Name of the task
tasks.<TASKNAME>.id | Unique id of container task
tasks.<TASKNAME>.ip | IP address of a previous daemon container task
tasks.<TASKNAME>.status | Phase status of any previous task
tasks.<TASKNAME>.exitCode | Exit code of any previous script or container task
tasks.<TASKNAME>.startedAt | Time-stamp when the task started
tasks.<TASKNAME>.finishedAt | Time-stamp when the task finished
tasks.<TASKNAME>.hostNodeName | Host node where the task ran (available from version 3.5)
tasks.<TASKNAME>.outputs.result | Output result of any previous container or script task
tasks.<TASKNAME>.outputs.parameters | When the previous task uses withItems or withParams, this contains a JSON array of the output parameter maps of each invocation
tasks.<TASKNAME>.outputs.parameters.<NAME> | Output parameter of any previous task. When the previous task uses withItems or withParams, this contains a JSON array of the output parameter values of each invocation
tasks.<TASKNAME>.outputs.artifacts.<NAME> | Output artifact of any previous task
-

HTTP Templates

-
-

Since v3.3

-
-

Only available for successCondition

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
request.method | Request method (string)
request.url | Request URL (string)
request.body | Request body (string)
request.headers | Request headers (map[string][]string)
response.statusCode | Response status code (int)
response.body | Response body (string)
response.headers | Response headers (map[string][]string)
-

RetryStrategy

-

When using the expression field within retryStrategy, special variables are available.

- - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
lastRetry.exitCode | Exit code of the last retry
lastRetry.status | Status of the last retry
lastRetry.duration | Duration in seconds of the last retry
lastRetry.message | Message output from the last retry (available from version 3.5)
-

Note: These variables evaluate to a string type. If using advanced expressions, either cast them to int values (expression: "{{=asInt(lastRetry.exitCode) >= 2}}") or compare them to string values (expression: "{{=lastRetry.exitCode != '2'}}").
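A minimal sketch of using these variables, following the {{= }} form shown in the note (the limit and threshold are only illustrative):

# fragment of a workflow or template spec
  retryStrategy:
    limit: "3"
    expression: "{{=asInt(lastRetry.exitCode) >= 2}}"   # only retry when the last exit code was 2 or more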

-

Container/Script Templates

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
pod.name | Pod name of the container/script
retries | The retry number of the container/script if retryStrategy is specified
inputs.artifacts.<NAME>.path | Local path of the input artifact
outputs.artifacts.<NAME>.path | Local path of the output artifact
outputs.parameters.<NAME>.path | Local path of the output parameter
-

Loops (withItems / withParam)

- - - - - - - - - - - - - - - - - -
Variable | Description
item | Value of the item in a list
item.<FIELDNAME> | Field value of the item in a list of maps
-

Metrics

-

When emitting custom metrics in a template, special variables are available that allow self-reference to the current -step.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
status | Phase status of the metric-emitting template
duration | Duration of the metric-emitting template in seconds (only applicable in Template-level metrics, for Workflow-level use workflow.duration)
exitCode | Exit code of the metric-emitting template
inputs.parameters.<NAME> | Input parameter of the metric-emitting template
outputs.parameters.<NAME> | Output parameter of the metric-emitting template
outputs.result | Output result of the metric-emitting template
resourcesDuration.{cpu,memory} | Resources duration in seconds. Must be one of resourcesDuration.cpu or resourcesDuration.memory, if available. For more info, see the Resource Duration doc.
retries | Retried count by retry strategy
-

Real-Time Metrics

-

Some variables can be emitted in real time (as opposed to just when the step/task completes). To emit these variables in real time, set realtime: true under gauge (note: only Gauge metrics allow for real-time variable emission). The metrics currently available for real-time emission are listed below, followed by a sketch.

-

For Workflow-level metrics:

-
  • workflow.duration

For Template-level metrics:

-
  • duration
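A hedged sketch of a workflow-level real-time gauge (the metric name and help text are only illustrative):

# fragment of a Workflow spec
  metrics:
    prometheus:
      - name: workflow_duration_gauge
        help: "Real-time duration of this workflow"
        gauge:
          realtime: true
          value: "{{workflow.duration}}"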

Global

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Variable | Description
workflow.name | Workflow name
workflow.namespace | Workflow namespace
workflow.mainEntrypoint | Workflow's initial entrypoint
workflow.serviceAccountName | Workflow service account name
workflow.uid | Workflow UID. Useful for setting ownership reference to a resource, or a unique artifact location
workflow.parameters.<NAME> | Input parameter to the workflow
workflow.parameters | All input parameters to the workflow as a JSON string (this is deprecated in favor of workflow.parameters.json as this doesn't work with expression tags and that does)
workflow.parameters.json | All input parameters to the workflow as a JSON string
workflow.outputs.parameters.<NAME> | Global parameter in the workflow
workflow.outputs.artifacts.<NAME> | Global artifact in the workflow
workflow.annotations.<NAME> | Workflow annotations
workflow.annotations.json | All Workflow annotations as a JSON string
workflow.labels.<NAME> | Workflow labels
workflow.labels.json | All Workflow labels as a JSON string
workflow.creationTimestamp | Workflow creation time-stamp formatted in RFC 3339 (e.g. 2018-08-23T05:42:49Z)
workflow.creationTimestamp.<STRFTIMECHAR> | Creation time-stamp formatted with a strftime format character
workflow.creationTimestamp.RFC3339 | Creation time-stamp formatted in RFC 3339
workflow.priority | Workflow priority
workflow.duration | Workflow duration estimate in seconds, may differ from actual duration by a couple of seconds
workflow.scheduledTime | Scheduled runtime formatted in RFC 3339 (only available for CronWorkflow)
-

Exit Handler

- - - - - - - - - - - - - - - - - -
Variable | Description
workflow.status | Workflow status. One of: Succeeded, Failed, Error
workflow.failures | A list of JSON objects containing information about nodes that failed or errored during execution. Available fields: displayName, message, templateName, phase, podName, and finishedAt.
-

Knowing where you are

-

The idea with creating a WorkflowTemplate is that they are reusable bits of code you will use in many actual Workflows. Sometimes it is useful to know which workflow you are part of.

-

workflow.mainEntrypoint is one way you can do this. If each of your actual workflows has a differing entrypoint, you can identify the workflow you're part of. Given this use in a WorkflowTemplate:

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: say-main-entrypoint
-spec:
-  entrypoint: echo
-  templates:
-  - name: echo
-    container:
-      image: alpine
-      command: [echo]
-      args: ["{{workflow.mainEntrypoint}}"]
-
-

I can distinguish my caller:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: foo-
-spec:
-  entrypoint: foo
-  templates:
-    - name: foo
-      steps:
-      - - name: step
-          templateRef:
-            name: say-main-entrypoint
-            template: echo
-
-

results in a log of foo

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: bar-
-spec:
-  entrypoint: bar
-  templates:
-    - name: bar
-      steps:
-      - - name: step
-          templateRef:
-            name: say-main-entrypoint
-            template: echo
-
-

results in a log of bar

-

This shouldn't be needed for logging, since you should be able to identify workflows through other labels in your cluster's log tool, but it can be helpful when generating metrics for the workflow, for example.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/variables/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/walk-through/argo-cli/index.html b/walk-through/argo-cli/index.html index 2150773311ec..bbca8f90eb1d 100644 --- a/walk-through/argo-cli/index.html +++ b/walk-through/argo-cli/index.html @@ -1,3984 +1,11 @@ - - - + - - - - - - - - - - - - Argo CLI - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Argo CLI - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Argo CLI

-

Installation

-

To install the Argo CLI, follow the instructions on the GitHub Releases page.

-

Usage

-

In case you want to follow along with this walk-through, here's a quick overview of the most useful argo command line interface (CLI) commands.

-
argo submit hello-world.yaml    # submit a workflow spec to Kubernetes
-argo list                       # list current workflows
-argo get hello-world-xxx        # get info about a specific workflow
-argo logs hello-world-xxx       # print the logs from a workflow
-argo delete hello-world-xxx     # delete workflow
-
-

You can also run workflow specs directly using kubectl, but the Argo CLI provides syntax checking, nicer output, and requires less typing.

-

See the CLI Reference for more details.

- - - - - - + +

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/argo-cli/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file diff --git a/walk-through/artifacts/index.html b/walk-through/artifacts/index.html index 4811fba8668d..76cb328e5b2b 100644 --- a/walk-through/artifacts/index.html +++ b/walk-through/artifacts/index.html @@ -1,4259 +1,11 @@ - - - + - - - - - - - - - - - - Artifacts - Argo Workflows - The workflow engine for Kubernetes - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + Artifacts - Argo Workflows - The workflow engine for Kubernetes + + - - - - - - - - - - - - - - - - - - -

Artifacts

-
-

Note

-

You will need to configure an artifact repository to run this example.

-
-

When running workflows, it is very common to have steps that generate or consume artifacts. Often, the output artifacts of one step may be used as input artifacts to a subsequent step.

-

The below workflow spec consists of two steps that run in sequence. The first step, named generate-artifact, generates an artifact using the whalesay template; the second step, named consume-artifact, passes that artifact to the print-message template, which consumes it.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: artifact-passing-
-spec:
-  entrypoint: artifact-example
-  templates:
-  - name: artifact-example
-    steps:
-    - - name: generate-artifact
-        template: whalesay
-    - - name: consume-artifact
-        template: print-message
-        arguments:
-          artifacts:
-          # bind message to the hello-art artifact
-          # generated by the generate-artifact step
-          - name: message
-            from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
-
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [sh, -c]
-      args: ["cowsay hello world | tee /tmp/hello_world.txt"]
-    outputs:
-      artifacts:
-      # generate hello-art artifact from /tmp/hello_world.txt
-      # artifacts can be directories as well as files
-      - name: hello-art
-        path: /tmp/hello_world.txt
-
-  - name: print-message
-    inputs:
-      artifacts:
-      # unpack the message input artifact
-      # and put it at /tmp/message
-      - name: message
-        path: /tmp/message
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["cat /tmp/message"]
-
-

The whalesay template uses the cowsay command to generate a file named /tmp/hello_world.txt. It then outputs this file as an artifact named hello-art. In general, the artifact's path may be a directory rather than just a file. The print-message template takes an input artifact named message, unpacks it at the path /tmp/message, and then prints the contents of /tmp/message using the cat command. The artifact-example template passes the hello-art artifact generated as an output of the generate-artifact step as the message input artifact to the print-message step. DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-artifact.outputs.artifacts.hello-art}}.

-

Optionally, for large artifacts, you can set podSpecPatch in the workflow spec to increase the resource requests for the init container and avoid out-of-memory issues.

-
<... snipped ...>
-  - name: large-artifact
-    # below patch gets merged with the actual pod spec and increases the memory
-    # request of the init container.
-    podSpecPatch: |
-      initContainers:
-        - name: init
-          resources:
-            requests:
-              memory: 2Gi
-              cpu: 300m
-    inputs:
-      artifacts:
-      - name: data
-        path: /tmp/large-file
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["cat /tmp/large-file"]
-<... snipped ...>
-
-

Artifacts are packaged as Tarballs and gzipped by default. You may customize this behavior by specifying an archive strategy, using the archive field. For example:

-
<... snipped ...>
-    outputs:
-      artifacts:
-        # default behavior - tar+gzip default compression.
-      - name: hello-art-1
-        path: /tmp/hello_world.txt
-
-        # disable archiving entirely - upload the file / directory as is.
-        # this is useful when the container layout matches the desired target repository layout.   
-      - name: hello-art-2
-        path: /tmp/hello_world.txt
-        archive:
-          none: {}
-
-        # customize the compression behavior (disabling it here).
-        # this is useful for files with varying compression benefits, 
-        # e.g. disabling compression for a cached build workspace and large binaries, 
-        # or increasing compression for "perfect" textual data - like a json/xml export of a large database.
-      - name: hello-art-3
-        path: /tmp/hello_world.txt
-        archive:
-          tar:
-            # no compression (also accepts the standard gzip 1 to 9 values)
-            compressionLevel: 0
-<... snipped ...>
-
-

Artifact Garbage Collection

-

As of version 3.4 you can configure your Workflow to automatically delete Artifacts that you don't need (see artifact repository capability for the currently supported storage engines).

-

Artifacts can be deleted OnWorkflowCompletion or OnWorkflowDeletion. You can specify your Garbage Collection strategy on both the Workflow level and the Artifact level, so for example, you may have temporary artifacts that can be deleted right away but a final output that should be persisted:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: artifact-gc-
-spec:
-  entrypoint: main
-  artifactGC:
-    strategy: OnWorkflowDeletion  # default Strategy set here applies to all Artifacts by default
-  templates:
-    - name: main
-      container:
-        image: argoproj/argosay:v2
-        command:
-          - sh
-          - -c
-        args:
-          - |
-            echo "can throw this away" > /tmp/temporary-artifact.txt
-            echo "keep this" > /tmp/keep-this.txt
-      outputs:
-        artifacts:
-          - name: temporary-artifact
-            path: /tmp/temporary-artifact.txt
-            s3:
-              key: temporary-artifact.txt
-          - name: keep-this
-            path: /tmp/keep-this.txt
-            s3:
-              key: keep-this.txt
-            artifactGC:
-              strategy: Never   # optional override for an Artifact
-
-

Artifact Naming

-

Consider parameterizing your S3 keys by {{workflow.uid}}, etc. (as shown in the examples below) if there's a possibility that you could have concurrent Workflows of the same spec. This avoids a scenario in which the artifact from one Workflow is deleted while the same S3 key is being generated for a different Workflow.

-
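For example, a minimal sketch of an output artifact whose S3 key is parameterized by the Workflow's UID (the bucket-relative key name is illustrative):

outputs:
  artifacts:
  - name: result
    path: /tmp/result.txt
    s3:
      key: my-workflow-{{workflow.uid}}/result.txt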

Service Accounts and Annotations

-

Does your S3 bucket require you to run with a special Service Account or IAM Role Annotation? You can either reuse the ones you use for creating artifacts or create new ones that are specific to deletion permissions. Generally, users will have a single Service Account or IAM Role to apply to all artifacts in the Workflow, but you can also customize this at the artifact level if needed:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: artifact-gc-
-spec:
-  entrypoint: main
-  artifactGC:
-    strategy: OnWorkflowDeletion 
-    ##############################################################################################
-    #    Workflow Level Service Account and Metadata
-    ##############################################################################################
-    serviceAccountName: my-sa
-    podMetadata:
-      annotations:
-        eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-iam-role
-  templates:
-    - name: main
-      container:
-        image: argoproj/argosay:v2
-        command:
-          - sh
-          - -c
-        args:
-          - |
-            echo "can throw this away" > /tmp/temporary-artifact.txt
-            echo "keep this" > /tmp/keep-this.txt
-      outputs:
-        artifacts:
-          - name: temporary-artifact
-            path: /tmp/temporary-artifact.txt
-            s3:
-              key: temporary-artifact-{{workflow.uid}}.txt
-            artifactGC:
-              ####################################################################################
-              #    Optional override capability
-              ####################################################################################
-              serviceAccountName: artifact-specific-sa
-              podMetadata:
-                annotations:
-                  eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/artifact-specific-iam-role
-          - name: keep-this
-            path: /tmp/keep-this.txt
-            s3:
-              key: keep-this-{{workflow.uid}}.txt
-            artifactGC:
-              strategy: Never
-
-

If you do supply your own Service Account you will need to create a RoleBinding that binds it with a role like this:

-
apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  annotations:
-    workflows.argoproj.io/description: |
-      This is the minimum recommended permissions needed if you want to use artifact GC.
-  name: artifactgc
-rules:
-- apiGroups:
-  - argoproj.io
-  resources:
-  - workflowartifactgctasks
-  verbs:
-  - list
-  - watch
-- apiGroups:
-  - argoproj.io
-  resources:
-  - workflowartifactgctasks/status
-  verbs:
-  - patch
-
-

This is the artifactgc role if you installed using one of the quick-start manifest files. If you installed with the install.yaml file for the release then the same permissions are in the argo-cluster-role.

-

If you don't supply your own ServiceAccount and are just using the default ServiceAccount, then the role needs a RoleBinding or ClusterRoleBinding that binds it to the default ServiceAccount.

-
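A minimal sketch of such a RoleBinding, assuming the artifactgc Role above and the default ServiceAccount in the Workflow's namespace (the binding name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: artifactgc-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: artifactgc
subjects:
- kind: ServiceAccount
  name: default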

What happens if Garbage Collection fails?

-

If deletion of the artifact fails for some reason (other than the Artifact already having been deleted which is not considered a failure), the Workflow's Status will be marked with a new Condition to indicate "Artifact GC Failure", a Kubernetes Event will be issued, and the Argo Server UI will also indicate the failure. For additional debugging, the user should find 1 or more Pods named <wfName>-artgc-* and can view the logs.

-

If the user needs to delete the Workflow and its child CRD objects, they will need to patch the Workflow to remove the finalizer preventing the deletion:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  finalizers:
-  - workflows.argoproj.io/artifact-gc
-
-

The finalizer can be deleted by doing:

-
kubectl patch workflow my-wf \
-    --type json \
-    --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
-
-

Or for simplicity use the Argo CLI argo delete command with flag --force, which under the hood removes the finalizer before performing the deletion.

-

Release Versions >= 3.5

-

A flag has been added to the Workflow Spec called forceFinalizerRemoval (see here) to force the finalizer's removal even if Artifact GC fails:

-
spec:
-  artifactGC:
-    strategy: OnWorkflowDeletion 
-    forceFinalizerRemoval: true
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/artifacts/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/conditionals/index.html b/walk-through/conditionals/index.html
index 43335a264910..de5968d77d0f 100644
--- a/walk-through/conditionals/index.html
+++ b/walk-through/conditionals/index.html

Conditionals

-

Argo Workflows also supports conditional execution. The when syntax is implemented with govaluate, which supports complex expressions. See the example below:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: coinflip-
-spec:
-  entrypoint: coinflip
-  templates:
-  - name: coinflip
-    steps:
-    # flip a coin
-    - - name: flip-coin
-        template: flip-coin
-    # evaluate the result in parallel
-    - - name: heads
-        template: heads                       # call heads template if "heads"
-        when: "{{steps.flip-coin.outputs.result}} == heads"
-      - name: tails
-        template: tails                       # call tails template if "tails"
-        when: "{{steps.flip-coin.outputs.result}} == tails"
-    - - name: flip-again
-        template: flip-coin
-    - - name: complex-condition
-        template: heads-tails-or-twice-tails
-        # call heads template if first flip was "heads" and second was "tails" OR both were "tails"
-        when: >-
-            ( {{steps.flip-coin.outputs.result}} == heads &&
-              {{steps.flip-again.outputs.result}} == tails
-            ) ||
-            ( {{steps.flip-coin.outputs.result}} == tails &&
-              {{steps.flip-again.outputs.result}} == tails )
-      - name: heads-regex
-        template: heads                       # call heads template if ~ "hea"
-        when: "{{steps.flip-again.outputs.result}} =~ hea"
-      - name: tails-regex
-        template: tails                       # call heads template if ~ "tai"
-        when: "{{steps.flip-again.outputs.result}} =~ tai"
-
-  # Return heads or tails based on a random number
-  - name: flip-coin
-    script:
-      image: python:alpine3.6
-      command: [python]
-      source: |
-        import random
-        result = "heads" if random.randint(0,1) == 0 else "tails"
-        print(result)
-
-  - name: heads
-    container:
-      image: alpine:3.6
-      command: [sh, -c]
-      args: ["echo \"it was heads\""]
-
-  - name: tails
-    container:
-      image: alpine:3.6
-      command: [sh, -c]
-      args: ["echo \"it was tails\""]
-
-  - name: heads-tails-or-twice-tails
-    container:
-      image: alpine:3.6
-      command: [sh, -c]
-      args: ["echo \"it was heads the first flip and tails the second. Or it was two times tails.\""]
-
-
-

Nested Quotes

-

If the parameter value contains quotes, it may invalidate the govaluate expression. -To handle parameters with quotes, embed an expr expression in the conditional. -For example:

-
- -
when: "{{=inputs.parameters['may-contain-quotes'] == 'example'}}"
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/conditionals/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/continuous-integration-examples/index.html b/walk-through/continuous-integration-examples/index.html
index fc0f14e1f16f..61a127827014 100644
--- a/walk-through/continuous-integration-examples/index.html
+++ b/walk-through/continuous-integration-examples/index.html

Continuous Integration Examples

-

Continuous integration is a popular application for workflows.

-

Some quick examples of CI workflows:

- -

And a CI WorkflowTemplate example:

- -

A more detailed example is https://github.com/sendible-labs/argo-workflows-ci-example, which lets you create a local CI workflow for learning purposes.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/continuous-integration-examples/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/custom-template-variable-reference/index.html b/walk-through/custom-template-variable-reference/index.html
index 4705a4cea20f..a13e73222c7c 100644
--- a/walk-through/custom-template-variable-reference/index.html
+++ b/walk-through/custom-template-variable-reference/index.html

Custom Template Variable Reference

-

This example shows how you can use variable references from another template language (e.g. Jinja) in an Argo workflow template. Argo validates and resolves only variables that start with one of the allowed Argo prefixes: {"item", "steps", "inputs", "outputs", "workflow", "tasks"}.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: custom-template-variable-
-spec:
-  entrypoint: hello-hello-hello
-
-  templates:
-    - name: hello-hello-hello
-      steps:
-        - - name: hello1
-            template: whalesay
-            arguments:
-              parameters: [{name: message, value: "hello1"}]
-        - - name: hello2a
-            template: whalesay
-            arguments:
-              parameters: [{name: message, value: "hello2a"}]
-          - name: hello2b
-            template: whalesay
-            arguments:
-              parameters: [{name: message, value: "hello2b"}]
-
-    - name: whalesay
-      inputs:
-        parameters:
-          - name: message
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{user.username}}"]
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/custom-template-variable-reference/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/daemon-containers/index.html b/walk-through/daemon-containers/index.html
index 57f63f58c103..cc5ff7a60531 100644
--- a/walk-through/daemon-containers/index.html
+++ b/walk-through/daemon-containers/index.html

Daemon Containers

-

Argo workflows can start containers that run in the background (also known as daemon containers) while the workflow itself continues execution. Note that the daemons will be automatically destroyed when the workflow exits the template scope in which the daemon was invoked. Daemon containers are useful for starting up services to be tested or to be used in testing (e.g., fixtures). We also find it very useful when running large simulations to spin up a database as a daemon for collecting and organizing the results. The big advantage of daemons compared with sidecars is that their existence can persist across multiple steps or even the entire workflow.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: daemon-step-
-spec:
-  entrypoint: daemon-example
-  templates:
-  - name: daemon-example
-    steps:
-    - - name: influx
-        template: influxdb              # start an influxdb as a daemon (see the influxdb template spec below)
-
-    - - name: init-database             # initialize influxdb
-        template: influxdb-client
-        arguments:
-          parameters:
-          - name: cmd
-            value: curl -XPOST 'http://{{steps.influx.ip}}:8086/query' --data-urlencode "q=CREATE DATABASE mydb"
-
-    - - name: producer-1                # add entries to influxdb
-        template: influxdb-client
-        arguments:
-          parameters:
-          - name: cmd
-            value: for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d "cpu,host=server01,region=uswest load=$i" ; sleep .5 ; done
-      - name: producer-2                # add entries to influxdb
-        template: influxdb-client
-        arguments:
-          parameters:
-          - name: cmd
-            value: for i in $(seq 1 20); do curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d "cpu,host=server02,region=uswest load=$((RANDOM % 100))" ; sleep .5 ; done
-      - name: producer-3                # add entries to influxdb
-        template: influxdb-client
-        arguments:
-          parameters:
-          - name: cmd
-            value: curl -XPOST 'http://{{steps.influx.ip}}:8086/write?db=mydb' -d 'cpu,host=server03,region=useast load=15.4'
-
- - name: consumer                  # consume entries from influxdb
-        template: influxdb-client
-        arguments:
-          parameters:
-          - name: cmd
-            value: curl --silent -G http://{{steps.influx.ip}}:8086/query?pretty=true --data-urlencode "db=mydb" --data-urlencode "q=SELECT * FROM cpu"
-
-  - name: influxdb
-    daemon: true                        # start influxdb as a daemon
-    retryStrategy:
-      limit: 10                         # retry container if it fails
-    container:
-      image: influxdb:1.2
-      command:
-      - influxd
-      readinessProbe:                   # wait for readinessProbe to succeed
-        httpGet:
-          path: /ping
-          port: 8086
-
-  - name: influxdb-client
-    inputs:
-      parameters:
-      - name: cmd
-    container:
-      image: appropriate/curl:latest
-      command: ["/bin/sh", "-c"]
-      args: ["{{inputs.parameters.cmd}}"]
-      resources:
-        requests:
-          memory: 32Mi
-          cpu: 100m
-
-

Step templates use the steps prefix to refer to another step: for example {{steps.influx.ip}}. In DAG templates, the tasks prefix is used instead: for example {{tasks.influx.ip}}.
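For example, a minimal DAG sketch that reuses the influxdb and influxdb-client templates above and addresses the daemon through the tasks prefix:

  - name: daemon-dag-example
    dag:
      tasks:
      - name: influx
        template: influxdb
      - name: init-database
        dependencies: [influx]
        template: influxdb-client
        arguments:
          parameters:
          - name: cmd
            value: curl -XPOST 'http://{{tasks.influx.ip}}:8086/query' --data-urlencode "q=CREATE DATABASE mydb"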


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/daemon-containers/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/dag/index.html b/walk-through/dag/index.html
index 3df6d369a8f1..7cf36dbe083e 100644
--- a/walk-through/dag/index.html
+++ b/walk-through/dag/index.html

DAG

-

As an alternative to specifying sequences of steps, you can define a workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task. -DAGs can be simpler to maintain for complex workflows and allow for maximum parallelism when running tasks.

-

In the following workflow, step A runs first, as it has no dependencies. -Once A has finished, steps B and C run in parallel. -Finally, once B and C have completed, step D runs.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: dag-diamond-
-spec:
-  entrypoint: diamond
-  templates:
-  - name: echo
-    inputs:
-      parameters:
-      - name: message
-    container:
-      image: alpine:3.7
-      command: [echo, "{{inputs.parameters.message}}"]
-  - name: diamond
-    dag:
-      tasks:
-      - name: A
-        template: echo
-        arguments:
-          parameters: [{name: message, value: A}]
-      - name: B
-        dependencies: [A]
-        template: echo
-        arguments:
-          parameters: [{name: message, value: B}]
-      - name: C
-        dependencies: [A]
-        template: echo
-        arguments:
-          parameters: [{name: message, value: C}]
-      - name: D
-        dependencies: [B, C]
-        template: echo
-        arguments:
-          parameters: [{name: message, value: D}]
-
-

The dependency graph may have multiple roots. -The templates called from a DAG or steps template can themselves be DAG or steps templates, allowing complex workflows to be split into manageable pieces.

-

Enhanced Depends

-

For more complicated, conditional dependencies, you can use the Enhanced Depends feature.

-
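As a sketch, a task using the depends field might look like this (task names are illustrative; see the Enhanced Depends documentation for the full syntax):

      - name: D
        depends: "A && (B.Succeeded || C.Failed)"
        template: echo
        arguments:
          parameters: [{name: message, value: D}]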

Fail Fast

-

By default, DAGs fail fast: when one task fails, no new tasks will be scheduled. -Once all running tasks are completed, the DAG will be marked as failed.

-

If failFast is set to false for a DAG, all branches will run to completion, regardless of failures in other branches.
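A minimal sketch of disabling fail-fast on a DAG template:

  - name: diamond
    dag:
      failFast: false        # all branches run to completion even if one task fails
      tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]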


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/dag/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/docker-in-docker-using-sidecars/index.html b/walk-through/docker-in-docker-using-sidecars/index.html
index 7b1d92e56fbf..349d4da96a27 100644
--- a/walk-through/docker-in-docker-using-sidecars/index.html
+++ b/walk-through/docker-in-docker-using-sidecars/index.html

Docker-in-Docker Using Sidecars

-

Note: It is increasingly unlikely that the below example will work for you on your version of Kubernetes. Since Kubernetes 1.24, the dockershim has been removed from Kubernetes, rendering Docker-in-Docker unworkable on most clusters. It is recommended to seek alternative methods of building containers, such as Kaniko or Buildkit. A Buildkit Workflow example is available in the examples directory of the Argo Workflows repository.

-
-

An application of sidecars is to implement Docker-in-Docker (DIND). DIND is useful when you want to run Docker commands from inside a container. For example, you may want to build and push a container image from inside your build container. In the following example, we use the docker:dind image to run a Docker daemon in a sidecar and give the main container access to the daemon.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: sidecar-dind-
-spec:
-  entrypoint: dind-sidecar-example
-  templates:
-  - name: dind-sidecar-example
-    container:
-      image: docker:19.03.13
-      command: [sh, -c]
-      args: ["until docker ps; do sleep 3; done; docker run --rm debian:latest cat /etc/os-release"]
-      env:
-      - name: DOCKER_HOST               # the docker daemon can be accessed on the standard port on localhost
-        value: 127.0.0.1
-    sidecars:
-    - name: dind
-      image: docker:19.03.13-dind          # Docker already provides an image for running a Docker daemon
-      command: [dockerd-entrypoint.sh]
-      env:
-        - name: DOCKER_TLS_CERTDIR         # Docker TLS env config
-          value: ""
-      securityContext:
-        privileged: true                # the Docker daemon can only run in a privileged container
-      # mirrorVolumeMounts will mount the same volumes specified in the main container
-      # to the sidecar (including artifacts), at the same mountPaths. This enables
-      # dind daemon to (partially) see the same filesystem as the main container in
-      # order to use features such as docker volume binding.
-      mirrorVolumeMounts: true
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/docker-in-docker-using-sidecars/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/exit-handlers/index.html b/walk-through/exit-handlers/index.html
index 8ee0421e7a92..e141db11d386 100644
--- a/walk-through/exit-handlers/index.html
+++ b/walk-through/exit-handlers/index.html

Exit handlers

-

An exit handler is a template that always executes, irrespective of success or failure, at the end of the workflow.

-

Some common use cases of exit handlers are:

-
  • cleaning up after a workflow runs
  • sending notifications of workflow status (e.g., e-mail/Slack)
  • posting the pass/fail status to a web-hook result (e.g. GitHub build result)
  • resubmitting or submitting another workflow
-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: exit-handlers-
-spec:
-  entrypoint: intentional-fail
-  onExit: exit-handler                  # invoke exit-handler template at end of the workflow
-  templates:
-  # primary workflow template
-  - name: intentional-fail
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo intentional failure; exit 1"]
-
-  # Exit handler templates
-  # After the completion of the entrypoint template, the status of the
-  # workflow is made available in the global variable {{workflow.status}}.
-  # {{workflow.status}} will be one of: Succeeded, Failed, Error
-  - name: exit-handler
-    steps:
-    - - name: notify
-        template: send-email
-      - name: celebrate
-        template: celebrate
-        when: "{{workflow.status}} == Succeeded"
-      - name: cry
-        template: cry
-        when: "{{workflow.status}} != Succeeded"
-  - name: send-email
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo send e-mail: {{workflow.name}} {{workflow.status}} {{workflow.duration}}"]
-  - name: celebrate
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo hooray!"]
-  - name: cry
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo boohoo!"]
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/exit-handlers/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/hardwired-artifacts/index.html b/walk-through/hardwired-artifacts/index.html
index f27671d6d86d..d4b67341e759 100644
--- a/walk-through/hardwired-artifacts/index.html
+++ b/walk-through/hardwired-artifacts/index.html

Hardwired Artifacts

-

You can use any container image to generate any kind of artifact. In practice, however, certain types of artifacts are very common, so there is built-in support for git, HTTP, GCS, and S3 artifacts.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hardwired-artifact-
-spec:
-  entrypoint: hardwired-artifact
-  templates:
-  - name: hardwired-artifact
-    inputs:
-      artifacts:
-      # Check out the main branch of the argo repo and place it at /src
-      # revision can be anything that git checkout accepts: branch, commit, tag, etc.
-      - name: argo-source
-        path: /src
-        git:
-          repo: https://github.com/argoproj/argo-workflows.git
-          revision: "main"
-      # Download kubectl 1.8.0 and place it at /bin/kubectl
-      - name: kubectl
-        path: /bin/kubectl
-        mode: 0755
-        http:
-          url: https://storage.googleapis.com/kubernetes-release/release/v1.8.0/bin/linux/amd64/kubectl
-      # Copy an s3 compatible artifact repository bucket (such as AWS, GCS and MinIO) and place it at /s3
-      - name: objects
-        path: /s3
-        s3:
-          endpoint: storage.googleapis.com
-          bucket: my-bucket-name
-          key: path/in/bucket
-          accessKeySecret:
-            name: my-s3-credentials
-            key: accessKey
-          secretKeySecret:
-            name: my-s3-credentials
-            key: secretKey
-    container:
-      image: debian
-      command: [sh, -c]
-      args: ["ls -l /src /bin/kubectl /s3"]
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/hardwired-artifacts/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/hello-world/index.html b/walk-through/hello-world/index.html
index 5222e5637494..cb9801d3ff79 100644
--- a/walk-through/hello-world/index.html
+++ b/walk-through/hello-world/index.html

Hello World

-

Let's start by creating a very simple workflow template to echo "hello world" using the docker/whalesay container -image from Docker Hub.

-

You can run this directly from your shell with a simple docker command:

-
$ docker run docker/whalesay cowsay "hello world"
- _____________
-< hello world >
- -------------
-    \
-     \
-      \
-                    ##        .
-              ## ## ##       ==
-           ## ## ## ##      ===
-       /""""""""""""""""___/ ===
-  ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~ /  ===- ~~~
-       \______ o          __/
-        \    \        __/
-          \____\______/
-
-
-Hello from Docker!
-This message shows that your installation appears to be working correctly.
-
-

Below, we run the same container on a Kubernetes cluster using an Argo workflow template. Be sure to read the comments -as they provide useful explanations.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow                  # new type of k8s spec
-metadata:
-  generateName: hello-world-    # name of the workflow spec
-spec:
-  entrypoint: whalesay          # invoke the whalesay template
-  templates:
-    - name: whalesay              # name of the template
-      container:
-        image: docker/whalesay
-        command: [ cowsay ]
-        args: [ "hello world" ]
-        resources: # limit the resources
-          limits:
-            memory: 32Mi
-            cpu: 100m
-
-

Argo adds a new kind of Kubernetes spec called a Workflow. The above spec contains a single template -called whalesay which runs the docker/whalesay container and invokes cowsay "hello world". The whalesay template -is the entrypoint for the spec. The entrypoint specifies the initial template that should be invoked when the workflow -spec is executed by Kubernetes. Being able to specify the entrypoint is more useful when there is more than one template -defined in the Kubernetes workflow spec. :-)
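If you have the Argo CLI installed, one way to try this out (a sketch, assuming the spec above is saved as hello-world.yaml) is:

argo submit --watch hello-world.yaml    # submit the spec and watch it until completion
argo logs @latest                       # print the logs of the most recent workflow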


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/hello-world/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/index.html b/walk-through/index.html
index e95d5d72e2a0..999c05c12fcf 100644
--- a/walk-through/index.html
+++ b/walk-through/index.html

About

-

Argo is implemented as a Kubernetes CRD (Custom Resource Definition). As a result, Argo workflows can be managed using kubectl and integrate natively with other Kubernetes services such as volumes, secrets, and RBAC. Argo is lightweight, installs in under a minute, and provides complete workflow features including parameter substitution, artifacts, fixtures, loops, and recursive workflows.

-

Dozens of examples are available in -the examples directory on GitHub.

-

For a complete description of the Argo workflow spec, please refer -to the spec documentation.

-

Progress through these examples in sequence to learn all the basics.

-

Start with Argo CLI.


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/kubernetes-resources/index.html b/walk-through/kubernetes-resources/index.html
index 5b7df2ecc1b3..6b451b058673 100644
--- a/walk-through/kubernetes-resources/index.html
+++ b/walk-through/kubernetes-resources/index.html

Kubernetes Resources

-

In many cases, you will want to manage Kubernetes resources from Argo workflows. The resource template allows you to create, delete, or update any type of Kubernetes resource.

-
# in a workflow. The resource template type accepts any k8s manifest
-# (including CRDs) and can perform any `kubectl` action against it (e.g. create,
-# apply, delete, patch).
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: k8s-jobs-
-spec:
-  entrypoint: pi-tmpl
-  templates:
-  - name: pi-tmpl
-    resource:                   # indicates that this is a resource template
-      action: create            # can be any kubectl action (e.g. create, delete, apply, patch)
-      # The successCondition and failureCondition are optional expressions.
-      # If failureCondition is true, the step is considered failed.
-      # If successCondition is true, the step is considered successful.
-      # They use kubernetes label selection syntax and can be applied against any field
-      # of the resource (not just labels). Multiple AND conditions can be represented by comma
-      # delimited expressions.
-      # For more details: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
-      successCondition: status.succeeded > 0
-      failureCondition: status.failed > 3
-      manifest: |               #put your kubernetes spec here
-        apiVersion: batch/v1
-        kind: Job
-        metadata:
-          generateName: pi-job-
-        spec:
-          template:
-            metadata:
-              name: pi
-            spec:
-              containers:
-              - name: pi
-                image: perl
-                command: ["perl",  "-Mbignum=bpi", "-wle", "print bpi(2000)"]
-              restartPolicy: Never
-          backoffLimit: 4
-
-

Note: Currently only a single resource can be managed by a resource template, so either a generateName or a name must be provided in the resource's metadata.

-

Resources created in this way are independent of the workflow. If you want the resource to be deleted when the workflow is deleted then you can use Kubernetes garbage collection with the workflow resource as an owner reference (example).

-

You can also collect data about the resource in output parameters (see more at k8s-jobs.yaml)

-
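As a sketch, a resource template can surface fields of the created object as output parameters using jsonPath or jqFilter (parameter names here are illustrative; see k8s-jobs.yaml for a complete example):

    outputs:
      parameters:
      - name: job-name
        valueFrom:
          jsonPath: '{.metadata.name}'
      - name: job-obj
        valueFrom:
          jqFilter: '.'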

Note: When patching, the resource will accept another attribute, mergeStrategy, which can be strategic, merge, or json. If this attribute is not supplied, it will default to strategic. Keep in mind that Custom Resources cannot be patched with strategic, so a different strategy must be chosen. For example, suppose you have the CronTab CRD defined, and the following instance of a CronTab:

-
apiVersion: "stable.example.com/v1"
-kind: CronTab
-spec:
-  cronSpec: "* * * * */5"
-  image: my-awesome-cron-image
-
-

This CronTab can be modified using the following Argo Workflow:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: k8s-patch-
-spec:
-  entrypoint: cront-tmpl
-  templates:
-  - name: cront-tmpl
-    resource:
-      action: patch
-      mergeStrategy: merge                 # Must be one of [strategic merge json]
-      manifest: |
-        apiVersion: "stable.example.com/v1"
-        kind: CronTab
-        spec:
-          cronSpec: "* * * * */10"
-          image: my-awesome-cron-image
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/kubernetes-resources/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/loops/index.html b/walk-through/loops/index.html
index 82c03707f2dc..74af7d5ed50b 100644
--- a/walk-through/loops/index.html
+++ b/walk-through/loops/index.html

Loops

-

When writing workflows, it is often useful to iterate over a set of inputs; this is how Argo Workflows performs loops.

-

There are two basic ways of running a template multiple times.

-
  • withItems takes a list of things to work on. Each item can be either:
    • a plain, single value, which is then usable in your template as '{{item}}'
    • a JSON object, where each element can be addressed by its key as '{{item.key}}'
  • withParam takes a JSON array of items and iterates over it - again, the items can be objects, as with withItems. This is very powerful, as you can generate the JSON in another step in your workflow, creating a dynamic workflow.
-

withItems basic example

-

This is the simplest example: we take a basic list of items and iterate over it with withItems. It is limited to one varying value for each instantiation of the template.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: loops-
-spec:
-  entrypoint: loop-example
-  templates:
-  - name: loop-example
-    steps:
-    - - name: print-message
-        template: whalesay
-        arguments:
-          parameters:
-          - name: message
-            value: "{{item}}"
-        withItems:              # invoke whalesay once for each item in parallel
-        - hello world           # item 1
-        - goodbye world         # item 2
-
-  - name: whalesay
-    inputs:
-      parameters:
-      - name: message
-    container:
-      image: docker/whalesay:latest
-      command: [cowsay]
-      args: ["{{inputs.parameters.message}}"]
-
-

withItems more complex example

-

If you'd like to pass more than one piece of information into each iteration, you can instead use a JSON object for each entry in withItems and then address its elements by key, as shown in this example.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: loops-maps-
-spec:
-  entrypoint: loop-map-example
-  templates:
-  - name: loop-map-example # parameter specifies the list to iterate over
-    steps:
-    - - name: test-linux
-        template: cat-os-release
-        arguments:
-          parameters:
-          - name: image
-            value: "{{item.image}}"
-          - name: tag
-            value: "{{item.tag}}"
-        withItems:
-        - { image: 'debian', tag: '9.1' }       #item set 1
-        - { image: 'debian', tag: '8.9' }       #item set 2
-        - { image: 'alpine', tag: '3.6' }       #item set 3
-        - { image: 'ubuntu', tag: '17.10' }     #item set 4
-
-  - name: cat-os-release
-    inputs:
-      parameters:
-      - name: image
-      - name: tag
-    container:
-      image: "{{inputs.parameters.image}}:{{inputs.parameters.tag}}"
-      command: [cat]
-      args: [/etc/os-release]
-
-

withParam example

-

This example does exactly the same job as the previous example, but using withParam to pass the information as a JSON array argument, instead of hard-coding it into the template.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: loops-param-arg-
-spec:
-  entrypoint: loop-param-arg-example
-  arguments:
-    parameters:
-    - name: os-list                                     # a list of items
-      value: |
-        [
-          { "image": "debian", "tag": "9.1" },
-          { "image": "debian", "tag": "8.9" },
-          { "image": "alpine", "tag": "3.6" },
-          { "image": "ubuntu", "tag": "17.10" }
-        ]
-
-  templates:
-  - name: loop-param-arg-example
-    inputs:
-      parameters:
-      - name: os-list
-    steps:
-    - - name: test-linux
-        template: cat-os-release
-        arguments:
-          parameters:
-          - name: image
-            value: "{{item.image}}"
-          - name: tag
-            value: "{{item.tag}}"
-        withParam: "{{inputs.parameters.os-list}}"      # parameter specifies the list to iterate over
-
-  # This template is the same as in the previous example
-  - name: cat-os-release
-    inputs:
-      parameters:
-      - name: image
-      - name: tag
-    container:
-      image: "{{inputs.parameters.image}}:{{inputs.parameters.tag}}"
-      command: [cat]
-      args: [/etc/os-release]
-
-

withParam example from another step in the workflow

-

Finally, the most powerful form of this is to generate that JSON array of objects dynamically in one step, and then pass it to the next step so that the number and values used in the second step are only calculated at runtime.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: loops-param-result-
-spec:
-  entrypoint: loop-param-result-example
-  templates:
-  - name: loop-param-result-example
-    steps:
-    - - name: generate
-        template: gen-number-list
-    # Iterate over the list of numbers generated by the generate step above
-    - - name: sleep
-        template: sleep-n-sec
-        arguments:
-          parameters:
-          - name: seconds
-            value: "{{item}}"
-        withParam: "{{steps.generate.outputs.result}}"
-
-  # Generate a list of numbers in JSON format
-  - name: gen-number-list
-    script:
-      image: python:alpine3.6
-      command: [python]
-      source: |
-        import json
-        import sys
-        json.dump([i for i in range(20, 31)], sys.stdout)
-
-  - name: sleep-n-sec
-    inputs:
-      parameters:
-      - name: seconds
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo sleeping for {{inputs.parameters.seconds}} seconds; sleep {{inputs.parameters.seconds}}; echo done"]
-
-

Accessing the aggregate results of a loop

-

The output of all iterations can be accessed as a JSON array, once the loop is done. -The example below shows how you can read it.

-

Please note: the output of each iteration must be valid JSON.

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: loop-test
-spec:
-  entrypoint: main
-  templates:
-  - name: main
-    steps:
-    - - name: execute-parallel-steps
-        template: print-json-entry
-        arguments:
-          parameters:
-          - name: index
-            value: '{{item}}'
-        withParam: '[1, 2, 3]'
-    - - name: call-access-aggregate-output
-        template: access-aggregate-output
-        arguments:
-          parameters:
-          - name: aggregate-results
-            # If the value of each loop iteration isn't a valid JSON,
-            # you get a JSON parse error:
-            value: '{{steps.execute-parallel-steps.outputs.result}}'
-  - name: print-json-entry
-    inputs:
-      parameters:
-      - name: index
-    # The output must be a valid JSON
-    script:
-      image: alpine:latest
-      command: [sh]
-      source: |
-        cat <<EOF
-        {
-        "input": "{{inputs.parameters.index}}",
-        "transformed-input": "{{inputs.parameters.index}}.jpeg"
-        }
-        EOF
-  - name: access-aggregate-output
-    inputs:
-      parameters:
-      - name: aggregate-results
-        value: 'no-value'
-    script:
-      image: alpine:latest
-      command: [sh]
-      source: |
-        echo 'inputs.parameters.aggregate-results: "{{inputs.parameters.aggregate-results}}"'
-
-


-

The last step of the workflow above should have this output: -inputs.parameters.aggregate-results: "[{"input":"1","transformed-input":"1.jpeg"},{"input":"2","transformed-input":"2.jpeg"},{"input":"3","transformed-input":"3.jpeg"}]"


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/loops/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/output-parameters/index.html b/walk-through/output-parameters/index.html
index ab91e7272ec1..ed6b1ded271b 100644
--- a/walk-through/output-parameters/index.html
+++ b/walk-through/output-parameters/index.html

Output Parameters

-

Output parameters provide a general mechanism to use the result of a step as a parameter (and not just as an artifact). This allows you to use the result from any type of step, not just a script, for conditional tests, loops, and arguments. Output parameters work similarly to script result except that the value of the output parameter is set to the contents of a generated file rather than the contents of stdout.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: output-parameter-
-spec:
-  entrypoint: output-parameter
-  templates:
-  - name: output-parameter
-    steps:
-    - - name: generate-parameter
-        template: whalesay
-    - - name: consume-parameter
-        template: print-message
-        arguments:
-          parameters:
-          # Pass the hello-param output from the generate-parameter step as the message input to print-message
-          - name: message
-            value: "{{steps.generate-parameter.outputs.parameters.hello-param}}"
-
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [sh, -c]
-      args: ["echo -n hello world > /tmp/hello_world.txt"]  # generate the content of hello_world.txt
-    outputs:
-      parameters:
-      - name: hello-param  # name of output parameter
-        valueFrom:
-          path: /tmp/hello_world.txt # set the value of hello-param to the contents of this file (/tmp/hello_world.txt)
-
-  - name: print-message
-    inputs:
-      parameters:
-      - name: message
-    container:
-      image: docker/whalesay:latest
-      command: [cowsay]
-      args: ["{{inputs.parameters.message}}"]
-
-

DAG templates use the tasks prefix to refer to another task, for example {{tasks.generate-parameter.outputs.parameters.hello-param}}.

-

result output parameter

-

The result output parameter captures standard output. It is accessible from the outputs map: outputs.result. Only 256 KB of the standard output stream will be captured.

-

Scripts

-

Outputs of a script are assigned to standard output and captured in the result parameter. More details here.

-

Containers

-

Container steps and tasks also have their standard output captured in the result parameter. Given a task called log-int, its result would be accessible as {{ tasks.log-int.outputs.result }}. If using steps, replace tasks with steps: {{ steps.log-int.outputs.result }}.
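For example, a minimal DAG sketch that feeds one task's captured standard output into another (the print-number and echo template names are illustrative):

  - name: result-example
    dag:
      tasks:
      - name: log-int
        template: print-number          # assumed to write a number to standard output
      - name: report
        dependencies: [log-int]
        template: echo
        arguments:
          parameters:
          - name: message
            value: "{{tasks.log-int.outputs.result}}"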


This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/output-parameters/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/walk-through/parameters/index.html b/walk-through/parameters/index.html
index 1aaaad7c5f45..246281081e54 100644
--- a/walk-through/parameters/index.html
+++ b/walk-through/parameters/index.html

Parameters

-

Let's look at a slightly more complex workflow spec with parameters.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-world-parameters-
-spec:
-  # invoke the whalesay template with
-  # "hello world" as the argument
-  # to the message parameter
-  entrypoint: whalesay
-  arguments:
-    parameters:
-    - name: message
-      value: hello world
-
-  templates:
-  - name: whalesay
-    inputs:
-      parameters:
-      - name: message       # parameter declaration
-    container:
-      # run cowsay with that message input parameter as args
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["{{inputs.parameters.message}}"]
-
-

This time, the whalesay template takes an input parameter named message that is passed as the args to the cowsay command. In order to reference parameters (e.g., "{{inputs.parameters.message}}"), the parameters must be enclosed in double quotes to escape the curly braces in YAML.

-

The argo CLI provides a convenient way to override parameters used to invoke the entrypoint. For example, the following command would bind the message parameter to "goodbye world" instead of the default "hello world".

-
argo submit arguments-parameters.yaml -p message="goodbye world"
-
-

When there are multiple parameters to override, the argo CLI can load a parameter file in YAML or JSON format. Here is an example of such a parameter file:

-
message: goodbye world
-
-

To use it, run the following command:

-
argo submit arguments-parameters.yaml --parameter-file params.yaml
-
-

Command-line parameters can also be used to override the default entrypoint and invoke any template in the workflow spec. For example, if you add a new version of the whalesay template called whalesay-caps but you don't want to change the default entrypoint, you can invoke this from the command line as follows:

-
argo submit arguments-parameters.yaml --entrypoint whalesay-caps
-
-

By using a combination of the --entrypoint and -p parameters, you can call any template in the workflow spec with any parameter that you like.
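For example, a hypothetical invocation combining both flags (whalesay-caps and the message parameter are taken from the examples above):

argo submit arguments-parameters.yaml --entrypoint whalesay-caps -p message="goodbye world"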

-

The values set in the spec.arguments.parameters are globally scoped and can be accessed via {{workflow.parameters.parameter_name}}. This can be useful to pass information to multiple steps in a workflow. For example, if you wanted to run your workflows with different logging levels that are set in the environment of each container, you could have a YAML file similar to this one:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: global-parameters-
-spec:
-  entrypoint: A
-  arguments:
-    parameters:
-    - name: log-level
-      value: INFO
-
-  templates:
-  - name: A
-    container:
-      image: containerA
-      env:
-      - name: LOG_LEVEL
-        value: "{{workflow.parameters.log-level}}"
-      command: [runA]
-  - name: B
-    container:
-      image: containerB
-      env:
-      - name: LOG_LEVEL
-        value: "{{workflow.parameters.log-level}}"
-      command: [runB]
-
-

In this workflow, both steps A and B would have the same log-level set to INFO and can easily be changed between workflow submissions using the -p flag.

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/parameters/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/recursion/index.html b/walk-through/recursion/index.html

Recursion

-

Templates can recursively invoke each other! In this variation of the above coin-flip template, we continue to flip coins until it comes up heads.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: coinflip-recursive-
-spec:
-  entrypoint: coinflip
-  templates:
-  - name: coinflip
-    steps:
-    # flip a coin
-    - - name: flip-coin
-        template: flip-coin
-    # evaluate the result in parallel
-    - - name: heads
-        template: heads                 # call heads template if "heads"
-        when: "{{steps.flip-coin.outputs.result}} == heads"
-      - name: tails                     # keep flipping coins if "tails"
-        template: coinflip
-        when: "{{steps.flip-coin.outputs.result}} == tails"
-
-  - name: flip-coin
-    script:
-      image: python:alpine3.6
-      command: [python]
-      source: |
-        import random
-        result = "heads" if random.randint(0,1) == 0 else "tails"
-        print(result)
-
-  - name: heads
-    container:
-      image: alpine:3.6
-      command: [sh, -c]
-      args: ["echo \"it was heads\""]
-
-

Here's the result of a couple of runs of coin-flip for comparison.

-
argo get coinflip-recursive-tzcb5
-
-STEP                         PODNAME                              MESSAGE
-  coinflip-recursive-vhph5
- ├───✔ flip-coin             coinflip-recursive-vhph5-2123890397
- └─┬─✔ heads                 coinflip-recursive-vhph5-128690560
-   └─○ tails
-
-STEP                          PODNAME                              MESSAGE
-  coinflip-recursive-tzcb5
- ├───✔ flip-coin              coinflip-recursive-tzcb5-322836820
- └─┬─○ heads
-   └─✔ tails
-     ├───✔ flip-coin          coinflip-recursive-tzcb5-1863890320
-     └─┬─○ heads
-       └─✔ tails
-         ├───✔ flip-coin      coinflip-recursive-tzcb5-1768147140
-         └─┬─○ heads
-           └─✔ tails
-             ├───✔ flip-coin  coinflip-recursive-tzcb5-4080411136
-             └─┬─✔ heads      coinflip-recursive-tzcb5-4080323273
-               └─○ tails
-
-

In the first run, the coin immediately comes up heads and we stop. In the second run, the coin comes up tails three times before it finally comes up heads and we stop.

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/recursion/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/retrying-failed-or-errored-steps/index.html b/walk-through/retrying-failed-or-errored-steps/index.html

Retrying Failed or Errored Steps

-

You can specify a retryStrategy that will dictate how failed or errored steps are retried:

-
# This example demonstrates the use of retry back offs
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: retry-backoff-
-spec:
-  entrypoint: retry-backoff
-  templates:
-  - name: retry-backoff
-    retryStrategy:
-      limit: 10
-      retryPolicy: "Always"
-      backoff:
-        duration: "1"      # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h", "1d"
-        factor: 2
-        maxDuration: "1m"  # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h", "1d"
-      affinity:
-        nodeAntiAffinity: {}
-    container:
-      image: python:alpine3.6
-      command: ["python", -c]
-      # fail with a 66% probability
-      args: ["import random; import sys; exit_code = random.choice([0, 1, 1]); sys.exit(exit_code)"]
-
-
• limit is the maximum number of times the container will be retried.
• retryPolicy specifies if a container will be retried on failure, error, both, or only transient errors (e.g. i/o or TLS handshake timeout). "Always" retries on both errors and failures. Also available: "OnFailure" (default), "OnError", and "OnTransientError" (available after v3.0.0-rc2).
• backoff is an exponential back-off.
• nodeAntiAffinity prevents running steps on the same host. The current implementation allows only an empty nodeAntiAffinity (i.e. nodeAntiAffinity: {}) and by default it uses the label kubernetes.io/hostname as the selector.

Providing an empty retryStrategy (i.e. retryStrategy: {}) will cause a container to retry until completion.
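For example, a minimal sketch (the template name reuses the image and command from the example above) of a template that keeps retrying until it succeeds:

  - name: retry-until-completion
    retryStrategy: {}               # empty strategy: retry until completion
    container:
      image: python:alpine3.6
      command: ["python", "-c"]
      # fail with a 66% probability, so several retries are usually needed
      args: ["import random; import sys; sys.exit(random.choice([0, 1, 1]))"]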

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/retrying-failed-or-errored-steps/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/scripts-and-results/index.html b/walk-through/scripts-and-results/index.html

Scripts And Results

-

Often, we just want a template that executes a script specified as a here-script (also known as a here document) in the workflow spec. This example shows how to do that:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: scripts-bash-
-spec:
-  entrypoint: bash-script-example
-  templates:
-  - name: bash-script-example
-    steps:
-    - - name: generate
-        template: gen-random-int-bash
-    - - name: print
-        template: print-message
-        arguments:
-          parameters:
-          - name: message
-            value: "{{steps.generate.outputs.result}}"  # The result of the here-script
-
-  - name: gen-random-int-bash
-    script:
-      image: debian:9.4
-      command: [bash]
-      source: |                                         # Contents of the here-script
-        cat /dev/urandom | od -N2 -An -i | awk -v f=1 -v r=100 '{printf "%i\n", f + r * $1 / 65536}'
-
-  - name: gen-random-int-python
-    script:
-      image: python:alpine3.6
-      command: [python]
-      source: |
-        import random
-        i = random.randint(1, 100)
-        print(i)
-
-  - name: gen-random-int-javascript
-    script:
-      image: node:9.1-alpine
-      command: [node]
-      source: |
-        var rand = Math.floor(Math.random() * 100);
-        console.log(rand);
-
-  - name: print-message
-    inputs:
-      parameters:
-      - name: message
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo result was: {{inputs.parameters.message}}"]
-
-

The script keyword allows the specification of the script body using the source tag. This creates a temporary file containing the script body and then passes the name of the temporary file as the final parameter to command, which should be an interpreter that executes the script body.

-

The use of the script feature also assigns the standard output of running the script to a special output parameter named result. This allows you to use the result of running the script itself in the rest of the workflow spec. In this example, the result is simply echoed by the print-message template.

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/scripts-and-results/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/secrets/index.html b/walk-through/secrets/index.html

Secrets

-

Argo supports the same secrets syntax and mechanisms as Kubernetes Pod specs, which allows access to secrets as environment variables or volume mounts. See the Kubernetes documentation for more information.

-
# To run this example, first create the secret by running:
-# kubectl create secret generic my-secret --from-literal=mypassword=S00perS3cretPa55word
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: secret-example-
-spec:
-  entrypoint: whalesay
-  # To access secrets as files, add a volume entry in spec.volumes[] and
-  # then in the container template spec, add a mount using volumeMounts.
-  volumes:
-  - name: my-secret-vol
-    secret:
-      secretName: my-secret     # name of an existing k8s secret
-  templates:
-  - name: whalesay
-    container:
-      image: alpine:3.7
-      command: [sh, -c]
-      args: ['
-        echo "secret from env: $MYSECRETPASSWORD";
-        echo "secret from file: `cat /secret/mountpath/mypassword`"
-      ']
-      # To access secrets as environment variables, use the k8s valueFrom and
-      # secretKeyRef constructs.
-      env:
-      - name: MYSECRETPASSWORD  # name of env var
-        valueFrom:
-          secretKeyRef:
-            name: my-secret     # name of an existing k8s secret
-            key: mypassword     # 'key' subcomponent of the secret
-      volumeMounts:
-      - name: my-secret-vol     # mount file containing secret at /secret/mountpath
-        mountPath: "/secret/mountpath"
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/secrets/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/sidecars/index.html b/walk-through/sidecars/index.html

Sidecars

-

A sidecar is another container that executes concurrently in the same pod as the main container and is useful in creating multi-container pods.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: sidecar-nginx-
-spec:
-  entrypoint: sidecar-nginx-example
-  templates:
-  - name: sidecar-nginx-example
-    container:
-      image: appropriate/curl
-      command: [sh, -c]
-      # Try to read from nginx web server until it comes up
-      args: ["until `curl -G 'http://127.0.0.1/' >& /tmp/out`; do echo sleep && sleep 1; done && cat /tmp/out"]
-    # Create a simple nginx web server
-    sidecars:
-    - name: nginx
-      image: nginx:1.13
-      command: [nginx, -g, daemon off;]
-
-

In the above example, we create a sidecar container that runs Nginx as a simple web server. The order in which containers come up is random, so in this example the main container polls the Nginx container until it is ready to service requests. This is a good design pattern when designing multi-container systems: always wait for any services you need to come up before running your main code.

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/sidecars/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/steps/index.html b/walk-through/steps/index.html

Steps

-

In this example, we'll see how to create multi-step workflows, how to define more than one template in a workflow spec, and how to create nested workflows. Be sure to read the comments as they provide useful explanations.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: steps-
-spec:
-  entrypoint: hello-hello-hello
-
-  # This spec contains two templates: hello-hello-hello and whalesay
-  templates:
-  - name: hello-hello-hello
-    # Instead of just running a container
-    # This template has a sequence of steps
-    steps:
-    - - name: hello1            # hello1 is run before the following steps
-        template: whalesay
-        arguments:
-          parameters:
-          - name: message
-            value: "hello1"
-    - - name: hello2a           # double dash => run after previous step
-        template: whalesay
-        arguments:
-          parameters:
-          - name: message
-            value: "hello2a"
-      - name: hello2b           # single dash => run in parallel with previous step
-        template: whalesay
-        arguments:
-          parameters:
-          - name: message
-            value: "hello2b"
-
-  # This is the same template as from the previous example
-  - name: whalesay
-    inputs:
-      parameters:
-      - name: message
-    container:
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["{{inputs.parameters.message}}"]
-
-

The above workflow spec prints three different flavors of "hello". The hello-hello-hello template consists of three steps. The first step named hello1 will be run in sequence whereas the next two steps named hello2a and hello2b will be run in parallel with each other. Using the argo CLI command, we can graphically display the execution history of this workflow spec, which shows that the steps named hello2a and hello2b ran in parallel with each other.

-
STEP            TEMPLATE           PODNAME                 DURATION  MESSAGE
-  steps-z2zdn  hello-hello-hello
- ├───✔ hello1   whalesay           steps-z2zdn-27420706    2s
- └─┬─✔ hello2a  whalesay           steps-z2zdn-2006760091  3s
-   └─✔ hello2b  whalesay           steps-z2zdn-2023537710  3s
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/steps/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/suspending/index.html b/walk-through/suspending/index.html

Suspending

-

Workflows can be suspended by

-
argo suspend WORKFLOW
-
-

Or by specifying a suspend step on the workflow:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: suspend-template-
-spec:
-  entrypoint: suspend
-  templates:
-  - name: suspend
-    steps:
-    - - name: build
-        template: whalesay
-    - - name: approve
-        template: approve
-    - - name: delay
-        template: delay
-    - - name: release
-        template: whalesay
-
-  - name: approve
-    suspend: {}
-
-  - name: delay
-    suspend:
-      duration: "20"    # Must be a string. Default unit is seconds. Could also be a Duration, e.g.: "2m", "6h"
-
-  - name: whalesay
-    container:
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["hello world"]
-
-

Once suspended, a Workflow will not schedule any new steps until it is resumed. It can be resumed manually by

-
argo resume WORKFLOW
-
-

Or automatically with a duration limit, as in the example above.

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/suspending/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/the-structure-of-workflow-specs/index.html b/walk-through/the-structure-of-workflow-specs/index.html

The Structure of Workflow Specs

-

We now know enough about the basic components of a workflow spec. To review its basic structure:

-
• Kubernetes header including meta-data
• Spec body
  • Entrypoint invocation with optional arguments
  • List of template definitions
• For each template definition
  • Name of the template
  • Optionally a list of inputs
  • Optionally a list of outputs
  • Container invocation (leaf template) or a list of steps
    • For each step, a template invocation

To summarize, workflow specs are composed of a set of Argo templates where each template consists of an optional input section, an optional output section and either a container invocation or a list of steps where each step invokes another template.

-

Note that the container section of the workflow spec will accept the same options as the container section of a pod spec, including but not limited to environment variables, secrets, and volume mounts. Similarly, for volume claims and volumes.
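As a rough sketch only (assuming a secret named my-secret and a volume named my-secret-vol declared in spec.volumes, as in the Secrets example earlier), a template's container can combine several of these pod-spec options:

  - name: pod-spec-options-example
    container:
      image: alpine:3.7
      command: [sh, -c]
      args: ["echo $LOG_LEVEL; cat /secret/mountpath/mypassword"]
      env:
      - name: LOG_LEVEL              # plain environment variable
        value: INFO
      - name: MYSECRETPASSWORD       # environment variable sourced from a secret
        valueFrom:
          secretKeyRef:
            name: my-secret          # assumes this secret already exists
            key: mypassword
      volumeMounts:                  # same syntax as a Kubernetes Pod spec
      - name: my-secret-vol          # assumes this volume is declared in spec.volumes
        mountPath: /secret/mountpath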

This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/the-structure-of-workflow-specs/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/timeouts/index.html b/walk-through/timeouts/index.html

Timeouts

-

You can use the field activeDeadlineSeconds to limit the elapsed time for a workflow:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: timeouts-
-spec:
-  activeDeadlineSeconds: 10 # terminate workflow after 10 seconds
-  entrypoint: sleep
-  templates:
-  - name: sleep
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo sleeping for 1m; sleep 60; echo done"]
-
-

You can limit the elapsed time for a specific template as well:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: timeouts-
-spec:
-  entrypoint: sleep
-  templates:
-  - name: sleep
-    activeDeadlineSeconds: 10 # terminate container template after 10 seconds
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo sleeping for 1m; sleep 60; echo done"]
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/timeouts/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/walk-through/volumes/index.html b/walk-through/volumes/index.html

Volumes

-

The following example dynamically creates a volume and then uses the volume in a two-step workflow.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: volumes-pvc-
-spec:
-  entrypoint: volumes-pvc-example
-  volumeClaimTemplates:                 # define volume, same syntax as k8s Pod spec
-  - metadata:
-      name: workdir                     # name of volume claim
-    spec:
-      accessModes: [ "ReadWriteOnce" ]
-      resources:
-        requests:
-          storage: 1Gi                  # Gi => 1024 * 1024 * 1024
-
-  templates:
-  - name: volumes-pvc-example
-    steps:
-    - - name: generate
-        template: whalesay
-    - - name: print
-        template: print-message
-
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [sh, -c]
-      args: ["echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt"]
-      # Mount workdir volume at /mnt/vol before invoking docker/whalesay
-      volumeMounts:                     # same syntax as k8s Pod spec
-      - name: workdir
-        mountPath: /mnt/vol
-
-  - name: print-message
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
-      # Mount workdir volume at /mnt/vol before invoking docker/whalesay
-      volumeMounts:                     # same syntax as k8s Pod spec
-      - name: workdir
-        mountPath: /mnt/vol
-
-

Volumes are a very useful way to move large amounts of data from one step in a workflow to another. Depending on the system, some volumes may be accessible concurrently from multiple steps.

-

In some cases, you want to access an already existing volume rather than creating/destroying one dynamically.

-
# Define Kubernetes PVC
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
-  name: my-existing-volume
-spec:
-  accessModes: [ "ReadWriteOnce" ]
-  resources:
-    requests:
-      storage: 1Gi
-
----
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: volumes-existing-
-spec:
-  entrypoint: volumes-existing-example
-  volumes:
-  # Pass my-existing-volume as an argument to the volumes-existing-example template
-  # Same syntax as k8s Pod spec
-  - name: workdir
-    persistentVolumeClaim:
-      claimName: my-existing-volume
-
-  templates:
-  - name: volumes-existing-example
-    steps:
-    - - name: generate
-        template: whalesay
-    - - name: print
-        template: print-message
-
-  - name: whalesay
-    container:
-      image: docker/whalesay:latest
-      command: [sh, -c]
-      args: ["echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt"]
-      volumeMounts:
-      - name: workdir
-        mountPath: /mnt/vol
-
-  - name: print-message
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
-      volumeMounts:
-      - name: workdir
-        mountPath: /mnt/vol
-
-

It's also possible to declare existing volumes at the template level, instead of the workflow level. Workflows can generate volumes using a resource step.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: template-level-volume-
-spec:
-  entrypoint: generate-and-use-volume
-  templates:
-  - name: generate-and-use-volume
-    steps:
-    - - name: generate-volume
-        template: generate-volume
-        arguments:
-          parameters:
-            - name: pvc-size
-              # In a real-world example, this could be generated by a previous workflow step.
-              value: '1Gi'
-    - - name: generate
-        template: whalesay
-        arguments:
-          parameters:
-            - name: pvc-name
-              value: '{{steps.generate-volume.outputs.parameters.pvc-name}}'
-    - - name: print
-        template: print-message
-        arguments:
-          parameters:
-            - name: pvc-name
-              value: '{{steps.generate-volume.outputs.parameters.pvc-name}}'
-
-  - name: generate-volume
-    inputs:
-      parameters:
-        - name: pvc-size
-    resource:
-      action: create
-      setOwnerReference: true
-      manifest: |
-        apiVersion: v1
-        kind: PersistentVolumeClaim
-        metadata:
-          generateName: pvc-example-
-        spec:
-          accessModes: ['ReadWriteOnce', 'ReadOnlyMany']
-          resources:
-            requests:
-              storage: '{{inputs.parameters.pvc-size}}'
-    outputs:
-      parameters:
-        - name: pvc-name
-          valueFrom:
-            jsonPath: '{.metadata.name}'
-
-  - name: whalesay
-    inputs:
-      parameters:
-        - name: pvc-name
-    volumes:
-      - name: workdir
-        persistentVolumeClaim:
-          claimName: '{{inputs.parameters.pvc-name}}'
-    container:
-      image: docker/whalesay:latest
-      command: [sh, -c]
-      args: ["echo generating message in volume; cowsay hello world | tee /mnt/vol/hello_world.txt"]
-      volumeMounts:
-      - name: workdir
-        mountPath: /mnt/vol
-
-  - name: print-message
-    inputs:
-        parameters:
-          - name: pvc-name
-    volumes:
-      - name: workdir
-        persistentVolumeClaim:
-          claimName: '{{inputs.parameters.pvc-name}}'
-    container:
-      image: alpine:latest
-      command: [sh, -c]
-      args: ["echo getting message from volume; find /mnt/vol; cat /mnt/vol/hello_world.txt"]
-      volumeMounts:
-      - name: workdir
-        mountPath: /mnt/vol
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/walk-through/volumes/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/webhooks/index.html b/webhooks/index.html

Webhooks

-
-

v2.11 and after

-
-

Many clients can send events via the events API endpoint using a standard authorization header. However, for clients that are unable to do so (e.g. because they use signature verification as proof of origin), additional configuration is required.

-

In the namespace that will receive the event, create access token resources for your client:

-
• A role with permissions to get workflow templates and to create a workflow: example
• A service account for the client: example
• A binding of the account to the role: example

Additionally create:

-
• A secret named argo-workflows-webhook-clients listing the service accounts: example

The secret argo-workflows-webhook-clients tells Argo (an example sketch follows the list below):

-
• What type of webhook the account can be used for, e.g. github.
• What "secret" that webhook is configured for, e.g. in your GitHub settings page.
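As a rough, hypothetical sketch only (the linked example is authoritative; the key name and values below are assumptions), such a secret maps a service account name to the webhook type and its configured "secret":

apiVersion: v1
kind: Secret
metadata:
  name: argo-workflows-webhook-clients
stringData:
  github-webhook-client: |     # assumed to match the client's service account name
    type: github               # the type of webhook
    secret: "shhhh!"           # the "secret" configured in your GitHub settings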
This page has moved to https://argo-workflows.readthedocs.io/en/latest/webhooks/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/widgets/index.html b/widgets/index.html

Widgets

-
-

v3.0 and after

-
-

Widgets are intended to be embedded into other applications using inline frames (iframe). This may not work with your configuration. You may need to:

-
• Run the Argo Server with an account that can read workflows. That can be done using --auth-mode=server and configuring the argo-server service account.
• Run the Argo Server with --x-frame-options=SAMEORIGIN or --x-frame-options= (see the sketch below).
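A minimal sketch, assuming you edit the argo-server Deployment directly (the surrounding manifest and chosen values are illustrative), combining the two flags mentioned above:

# hypothetical container args on the argo-server Deployment
args:
- server
- --auth-mode=server
- --x-frame-options=SAMEORIGIN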
This page has moved to https://argo-workflows.readthedocs.io/en/latest/widgets/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/windows/index.html b/windows/index.html

Windows Container Support

-

The Argo server and the workflow controller currently only run on Linux. The workflow executor, however, also runs on Windows nodes, meaning you can use Windows containers inside your workflows! Here are the steps to get started.

-

Requirements

-
• Kubernetes 1.14 or later, supporting Windows nodes
• A hybrid cluster containing Linux and Windows nodes, as described in the Kubernetes docs
• Argo configured and running, as described here

Schedule workflows with Windows containers

-

If you're running workflows in your hybrid Kubernetes cluster, always make sure to include a nodeSelector to run the steps on the correct host OS:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-windows-
-spec:
-  entrypoint: hello-win
-  templates:
-    - name: hello-win
-      nodeSelector:
-        kubernetes.io/os: windows    # specify the OS your step should run on
-      container:
-        image: mcr.microsoft.com/windows/nanoserver:1809
-        command: ["cmd", "/c"]
-        args: ["echo", "Hello from Windows Container!"]
-
-

You can run this example and get the logs:

-
$ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-windows.yaml
-$ argo logs hello-windows-s9kk5
-hello-windows-s9kk5: "Hello from Windows Container!"
-
-

Schedule hybrid workflows

-

You can also run different steps on different host operating systems. This can for example be very helpful when you need to compile your application on Windows and Linux.

-

An example workflow can look like the following:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-hybrid-
-spec:
-  entrypoint: mytemplate
-  templates:
-    - name: mytemplate
-      steps:
-        - - name: step1
-            template: hello-win
-        - - name: step2
-            template: hello-linux
-
-    - name: hello-win
-      nodeSelector:
-        kubernetes.io/os: windows
-      container:
-        image: mcr.microsoft.com/windows/nanoserver:1809
-        command: ["cmd", "/c"]
-        args: ["echo", "Hello from Windows Container!"]
-    - name: hello-linux
-      nodeSelector:
-        kubernetes.io/os: linux
-      container:
-        image: alpine
-        command: [echo]
-        args: ["Hello from Linux Container!"]
-
-

Again, you can run this example and get the logs:

-
$ argo submit --watch https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/hello-hybrid.yaml
-$ argo logs hello-hybrid-plqpp
-hello-hybrid-plqpp-1977432187: "Hello from Windows Container!"
-hello-hybrid-plqpp-764774907: Hello from Linux Container!
-
-

Artifact mount path

-

Artifacts work mostly the same way as on Linux. All paths get automatically mapped to the C: drive. For example:

-
 # ...
-    - name: print-message
-      inputs:
-        artifacts:
-          # unpack the message input artifact
-          # and put it at C:\message
-          - name: message
-            path: "/message" # gets mapped to C:\message
-      nodeSelector:
-        kubernetes.io/os: windows
-      container:
-        image: mcr.microsoft.com/windows/nanoserver:1809
-        command: ["cmd", "/c"]
-        args: ["dir C:\\message"]   # List the C:\message directory
-
-

Remember that volume mounts on Windows can only target a directory in the container, and not an individual file.

-

Limitations

-
• Sharing process namespaces doesn't work on Windows, so you can't use the Process Namespace Sharing (PNS) workflow executor.
• The executor Windows container is built using Nano Server as the base image. Running a newer Windows version (e.g. 1909) is currently not confirmed to be working. If this is required, you need to build the executor container yourself by first adjusting the base image.

Building the workflow executor image for Windows

-

To build the workflow executor image for Windows you need a Windows machine running Windows Server 2019 with Docker installed like described in the docs.

-

You then clone the project and run the Docker build with the Dockerfile for Windows and argoexec as a target:

-
git clone https://github.com/argoproj/argo-workflows.git
-cd argo
-docker build -t myargoexec -f .\Dockerfile.windows --target argoexec .
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/windows/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/work-avoidance/index.html b/work-avoidance/index.html

Work Avoidance

-
-

v2.9 and after

-
-

You can make workflows faster and more robust by employing work avoidance. A workflow that utilizes this is simply a workflow containing steps that do not run if the work has already been done.

-

This technique is similar to memoization. Work avoidance is entirely in your control: you decide how and when to skip the work. Memoization is a feature of Argo Workflows that automatically skips steps which generate outputs. Prior to version 3.5 this required outputs to be specified, but you can use memoization for all steps and tasks in version 3.5 or later.

-

The simplest way to do this is to use marker files.

-

Use cases:

-
• An expensive step appears across multiple workflows - you want to avoid repeating it.
• A workflow has unreliable tasks - you want to be able to resubmit the workflow.

A marker file is a file that indicates the work has already been done. Before doing the work, check whether the marker file already exists:

-
if [ -e /work/markers/name-of-task ]; then
-    echo "work already done"
-    exit 0
-fi
-echo "working very hard"
-touch /work/markers/name-of-task
-
-

Choose a name for the file that is unique for the task, e.g. the template name and all the parameters:

-
touch /work/markers/$(date +%Y-%m-%d)-echo-{{inputs.parameters.num}}
-
-

You need to store the marker files between workflows; this can be achieved using a PVC and an optional input artifact.
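A minimal sketch of the PVC part, assuming a pre-existing claim named markers-pvc and a task name of name-of-task (the linked complete example is authoritative):

# fragment of a Workflow spec
spec:
  volumes:
  - name: markers
    persistentVolumeClaim:
      claimName: markers-pvc       # an existing claim that outlives any one workflow
  templates:
  - name: echo
    container:
      image: alpine:3.7
      command: [sh, -c]
      args: ["if [ -e /work/markers/name-of-task ]; then echo work already done; exit 0; fi; echo working very hard; touch /work/markers/name-of-task"]
      volumeMounts:
      - name: markers
        mountPath: /work/markers   # marker files survive between workflows on the PVC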

-

This complete work avoidance example has the following:

-
• A PVC to store the markers on.
• A load-markers step that loads the marker files from artifact storage.
• Multiple echo tasks that avoid work using marker files.
• A save-markers exit handler to save the marker files, even if they are not needed.
This page has moved to https://argo-workflows.readthedocs.io/en/latest/work-avoidance/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/workflow-archive/index.html b/workflow-archive/index.html

Workflow Archive

-
-

v2.5 and after

-
-

If you want to keep completed workflows for a long time, you can use the workflow archive to save them in a Postgres or MySQL (>= 5.7.8) database. The workflow archive stores the status of the workflow, which pods have been executed, what the result was, and so on. The job logs of the workflow pods will not be archived. If you need to save the logs of the pods, you must set up an artifact repository according to this doc.

-

The quick-start deployment includes a Postgres database server. In this case the workflow archive is already enabled. Such a deployment is convenient for test environments, but in a production environment you must use a production-quality database service.

-

Enabling Workflow Archive

-

To enable archiving of the workflows, you must configure database parameters in the persistence section of your configuration and set archive: to true.

-

Example:

-
persistence: 
-  archive: true
-  postgresql:
-    host: localhost
-    port: 5432
-    database: postgres
-    tableName: argo_workflows
-    userNameSecret:
-      name: argo-postgres-config
-      key: username
-    passwordSecret:
-      name: argo-postgres-config
-      key: password
-

You must also create the secret with database user and password in the namespace of the workflow controller.

-

Example:

-
kubectl create secret generic argo-postgres-config -n argo --from-literal=password=mypassword --from-literal=username=argodbuser
-

Note that IAM-based authentication is not currently supported. However, you can start your database proxy as a sidecar (e.g. via CloudSQL Proxy on GCP) and then specify your local proxy address, IAM username, and an empty string as your password in the persistence configuration to connect to it.
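A minimal sketch of such a configuration, reusing the persistence example above (the secret contents are assumptions: the username key holds the IAM user and the password key holds an empty string):

persistence:
  archive: true
  postgresql:
    host: localhost                # the database proxy running as a sidecar
    port: 5432
    database: postgres
    tableName: argo_workflows
    userNameSecret:
      name: argo-postgres-config   # username key holds the IAM username
      key: username
    passwordSecret:
      name: argo-postgres-config   # password key holds an empty string
      key: password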

-

The following tables will be created in the database when you start the workflow controller with the archive enabled:

-
• argo_workflows
• argo_archived_workflows
• argo_archived_workflows_labels
• schema_history

Automatic Database Migration

-

Every time the Argo workflow-controller starts with persistence enabled, it tries to migrate the database to the correct version. If the database migration fails, the workflow-controller will also fail to start. In this case you can delete all the above tables and restart the workflow-controller.

-

If you know what you are doing, you also have the option to skip migration:

-
persistence: 
-  skipMigration: true
-

Required database permissions

-

Postgres

-

The database user/role must have CREATE and USAGE permissions on the public schema of the database so that the tables can be created during the migration.
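For example, one way to grant these privileges (argodbuser is the user from the secret example above and is only illustrative):

psql -d postgres -c 'GRANT CREATE, USAGE ON SCHEMA public TO argodbuser;'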

-

Archive TTL

-

You can configure the time period to keep archived workflows before they are deleted by the archived workflow garbage collection function. The default is forever.

-

Example:

-
persistence: 
-  archiveTTL: 10d
-

The ARCHIVED_WORKFLOW_GC_PERIOD variable defines the periodicity of running the garbage collection function. The default value is documented here. When the workflow controller starts, it sets the ticker to run every ARCHIVED_WORKFLOW_GC_PERIOD. It does not run the garbage collection function immediately; the first garbage collection happens only after the period defined in the ARCHIVED_WORKFLOW_GC_PERIOD variable.
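A minimal sketch, assuming the variable is set as an environment variable on the workflow-controller Deployment (the chosen value is illustrative):

# hypothetical snippet of the workflow-controller container spec
env:
- name: ARCHIVED_WORKFLOW_GC_PERIOD
  value: "30m"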

-

Cluster Name

-

Optionally you can set a unique name for your Kubernetes cluster. This name will populate the clustername field in the argo_archived_workflows table.

-

Example:

-
persistence: 
-  clusterName: dev-cluster
-

Disabling Workflow Archive

-

To disable archiving of the workflows, set archive: to false in the persistence section of your configuration.

-

Example:

-
persistence: 
-  archive: false
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-archive/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/workflow-concepts/index.html b/workflow-concepts/index.html

Core Concepts

-

This page serves as an introduction to the core concepts of Argo.

-

The Workflow

-

The Workflow is the most important resource in Argo and serves two important functions:

-
1. It defines the workflow to be executed.
2. It stores the state of the workflow.

Because of these dual responsibilities, a Workflow should be treated as a "live" object. It is not only a static definition, but is also an "instance" of said definition. (If it isn't clear what this means, it will be explained below).

-

Workflow Spec

-

The workflow to be executed is defined in the Workflow.spec field. The core structure of a Workflow spec is a list of templates and an entrypoint.

-

templates can be loosely thought of as "functions": they define instructions to be executed. The entrypoint field defines what the "main" function will be – that is, the template that will be executed first.

-

Here is an example of a simple Workflow spec with a single template:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-world-  # Name of this Workflow
-spec:
-  entrypoint: whalesay        # Defines "whalesay" as the "main" template
-  templates:
-  - name: whalesay            # Defining the "whalesay" template
-    container:
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["hello world"]   # This template runs "cowsay" in the "whalesay" image with arguments "hello world"
-
-

template Types

-

There are 6 types of templates, divided into two different categories.

-

Template Definitions

-

These templates define work to be done, usually in a Container.

-
Container
-

Perhaps the most common template type, it will schedule a Container. The spec of the template is the same as the Kubernetes container spec, so you can define a container here the same way you do anywhere else in Kubernetes.

-

Example:

-
  - name: whalesay
-    container:
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["hello world"]
-
-
Script
-

A convenience wrapper around a container. The spec is the same as a container, but adds the source: field which allows you to define a script in-place. The script will be saved into a file and executed for you. The result of the script is automatically exported into an Argo variable, either {{tasks.<NAME>.outputs.result}} or {{steps.<NAME>.outputs.result}}, depending on how it was called.

-

Example:

-
  - name: gen-random-int
-    script:
-      image: python:alpine3.6
-      command: [python]
-      source: |
-        import random
-        i = random.randint(1, 100)
-        print(i)
-
-
Resource
-

Performs operations on cluster Resources directly. It can be used to get, create, apply, delete, replace, or patch resources on your cluster.

-

This example creates a ConfigMap resource on the cluster:

-
  - name: k8s-owner-reference
-    resource:
-      action: create
-      manifest: |
-        apiVersion: v1
-        kind: ConfigMap
-        metadata:
-          generateName: owned-eg-
-        data:
-          some: value
-
-
Suspend
-

A suspend template will suspend execution, either for a duration or until it is resumed manually. Suspend templates can be resumed from the CLI (with argo resume), the API endpoint, or the UI.

-

Example:

-
  - name: delay
-    suspend:
-      duration: "20s"
-
-

Template Invocators

-

These templates are used to invoke/call other templates and provide execution control.

-
Steps
-

A steps template allows you to define your tasks in a series of steps. The structure of the template is a "list of lists". Outer lists will run sequentially and inner lists will run in parallel. If you want to run inner lists one by one, use the Synchronization feature. You can set a wide array of options to control execution, such as when: clauses to conditionally execute a step.

-

In this example step1 runs first. Once it is completed, step2a and step2b will run in parallel:

-
  - name: hello-hello-hello
-    steps:
-    - - name: step1
-        template: prepare-data
-    - - name: step2a
-        template: run-data-first-half
-      - name: step2b
-        template: run-data-second-half
-
-
DAG
-

A dag template allows you to define your tasks as a graph of dependencies. In a DAG, you list all your tasks and set which other tasks must complete before a particular task can begin. Tasks without any dependencies will be run immediately.

-

In this example A runs first. Once it is completed, B and C will run in parallel and once they both complete, D will run:

-
  - name: diamond
-    dag:
-      tasks:
-      - name: A
-        template: echo
-      - name: B
-        dependencies: [A]
-        template: echo
-      - name: C
-        dependencies: [A]
-        template: echo
-      - name: D
-        dependencies: [B, C]
-        template: echo
-
-

Architecture

-

If you are interested in Argo's underlying architecture, see Architecture.

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-concepts/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/workflow-controller-configmap/index.html b/workflow-controller-configmap/index.html

Workflow Controller Config Map

-

Introduction

-

The Workflow Controller Config Map is used to set controller-wide settings.

-

For a detailed example, please see workflow-controller-configmap.yaml.

-

Alternate Structure

-

In all versions, the configuration may be under a config: | key:

-
# This file describes the config settings available in the workflow controller configmap
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  config: |
-    instanceID: my-ci-controller
-    artifactRepository:
-      archiveLogs: true
-      s3:
-        endpoint: s3.amazonaws.com
-        bucket: my-bucket
-        region: us-west-2
-        insecure: false
-        accessKeySecret:
-          name: my-s3-credentials
-          key: accessKey
-        secretKeySecret:
-          name: my-s3-credentials
-          key: secretKey
-
-

In version 2.7+, the config: | key is optional. However, if the config: | key is not used, all nested maps under top level keys should be strings. This makes it easier to generate the map with some configuration management tools like Kustomize.

-
# This file describes the config settings available in the workflow controller configmap
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:                      # "config: |" key is optional in 2.7+!
-  instanceID: my-ci-controller
-  artifactRepository: |    # However, all nested maps must be strings
-   archiveLogs: true
-   s3:
-     endpoint: s3.amazonaws.com
-     bucket: my-bucket
-     region: us-west-2
-     insecure: false
-     accessKeySecret:
-       name: my-s3-credentials
-       key: accessKey
-     secretKeySecret:
-       name: my-s3-credentials
-       key: secretKey
-
This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-controller-configmap/.
You should be redirected there automatically. Please click the link above if you are not redirected.

diff --git a/workflow-creator/index.html b/workflow-creator/index.html

Workflow Creator

-
-

v2.9 and after

-
-

If you create your workflow via the CLI or UI, an attempt will be made to label it with the user who created it:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  name: my-wf
-  labels:
-    workflows.argoproj.io/creator: admin
-    # labels must be DNS formatted, so the "@" is replaced by '.at.'
-    workflows.argoproj.io/creator-email: admin.at.your.org
-    workflows.argoproj.io/creator-preferred-username: admin-preferred-username
-
-
-

Note

-

Labels only contain [-_.0-9a-zA-Z], so any other characters will be turned into -.
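As a small illustration (the namespace and user name below are placeholders), these labels can be used with ordinary Kubernetes label selectors to find a given user's workflows:

kubectl get workflows -n argo -l workflows.argoproj.io/creator=admin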

-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-creator/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-events/index.html b/workflow-events/index.html
index d4f34cc17b89..3a47b5c2bffd 100644
--- a/workflow-events/index.html
+++ b/workflow-events/index.html
@@ -1,3952 +1,11 @@
+ Workflow Events - Argo Workflows - The workflow engine for Kubernetes

Workflow Events

-
-

v2.7.2

-
-

⚠️ Do not use Kubernetes events for automation. Events may be lost or rolled up.

-

We emit Kubernetes events for certain state changes.

-

Workflow state change:

-
    -
  • WorkflowRunning
  • -
  • WorkflowSucceeded
  • -
  • WorkflowFailed
  • -
  • WorkflowTimedOut
  • -
-

Node state change:

-
    -
  • WorkflowNodeRunning
  • -
  • WorkflowNodeSucceeded
  • -
  • WorkflowNodeFailed
  • -
  • WorkflowNodeError
  • -
-

The involved object is the workflow in both cases. Additionally, for node state change events, annotations indicate the name and type of the involved node:

-
metadata:
-  name: my-wf.160434cb3af841f8
-  namespace: my-ns
-  annotations:
-    workflows.argoproj.io/node-name: my-node
-    workflows.argoproj.io/node-type: Pod
-type: Normal
-reason: WorkflowNodeSucceeded
-message: 'Succeeded node my-node: my message'
-involvedObject:
-  apiVersion: v1alpha1
-  kind: Workflow
-  name: my-wf
-  namespace: my-ns
-  resourceVersion: "1234"
-  uid: my-uid
-firstTimestamp: "2020-04-09T16:50:16Z"
-lastTimestamp: "2020-04-09T16:50:16Z"
-count: 1
-
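For example, you can list these events with standard field selectors (a hedged sketch; the namespace, reason, and workflow name are placeholders):

kubectl get events -n my-ns --field-selector reason=WorkflowNodeSucceeded
kubectl get events -n my-ns --field-selector involvedObject.kind=Workflow,involvedObject.name=my-wf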

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-events/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-executors/index.html b/workflow-executors/index.html
index 377a63894d83..85ed00e38d7f 100644
--- a/workflow-executors/index.html
+++ b/workflow-executors/index.html
@@ -1,4207 +1,11 @@
+ Workflow Executors - Argo Workflows - The workflow engine for Kubernetes

Workflow Executors

-

A workflow executor is a process that conforms to a specific interface that allows Argo to perform certain actions like monitoring pod logs, collecting artifacts, managing container life-cycles, etc.

-

The executor to be used in your workflows can be changed in the config map under the containerRuntimeExecutor key (removed in v3.4).

-

Emissary (emissary)

-
-

v3.1 and after

-
-

Default in >= v3.3.

-

This is the most fully featured executor.

-
    -
  • Reliability:
      -
    • Works on GKE Autopilot
    • -
    • Does not require init process to kill sub-processes.
    • -
    -
  • -
  • More secure:
      -
    • No privileged access
    • -
    • Cannot escape the privileges of the pod's service account
    • -
    • Can runAsNonRoot.
    • -
    -
  • -
  • Scalable:
      -
    • It reads and writes to and from the container's disk and typically does not use any network APIs unless a resource type template is used.
    • -
    -
  • -
  • Artifacts:
      -
    • Output artifacts can be located on the base layer (e.g. /tmp).
    • -
    -
  • -
  • Configuration:
      -
    • command should be specified for containers.
    • -
    -
  • -
-

You can determine values as follows:

-
docker image inspect -f '{{.Config.Entrypoint}} {{.Config.Cmd}}' argoproj/argosay:v2
-
-

Learn more about command and args
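For instance, here is a minimal sketch of specifying command explicitly so the emissary does not need to look it up (the image and its /argosay entrypoint come from the inspect command above; the rest of the names are illustrative):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: emissary-command-
spec:
  entrypoint: main
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
        command: [/argosay]          # set explicitly, matching the image's entrypoint
        args: [echo, "hello world"]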

-

Image Index/Cache

-

If you don't provide a command to run, the emissary will grab it from the container image. You can also specify it in the workflow spec, or the emissary will look it up in the image index. This is nothing more fancy than a configuration item.

-

The emissary will create a cache entry, using the image name and version as the key and the command as the value, and it will reuse it for that specific image/version.

-

Exit Code 64

-

The emissary will exit with code 64 if it fails. This may indicate a bug in the emissary.

-

Docker (docker)

-

⚠️Deprecated. Removed in v3.4.

-

Default in <= v3.2.

-
    -
  • Least secure:
      -
    • It requires the host's docker.sock to be mounted with privileged access, which is often rejected by Open Policy Agent (OPA) or your Pod Security Policy (PSP).
    • -
    • It can escape the privileges of the pod's service account
    • -
    • It cannot runAsNonRoot.
    • -
    -
  • -
  • Equal most scalable:
      -
    • It communicates directly with the local Docker daemon.
    • -
    -
  • -
  • Artifacts:
      -
    • Output artifacts can be located on the base layer (e.g. /tmp).
    • -
    -
  • -
  • Configuration:
      -
    • No additional configuration needed.
    • -
    -
  • -
-

Note: when using docker as the workflow executor, messages printed to both stdout and stderr are captured in the Argo variable .outputs.result.

-

Kubelet (kubelet)

-

⚠️Deprecated. Removed in v3.4.

-
    -
  • Secure
      -
    • No privileged access
    • -
    • Cannot escape the privileges of the pod's service account
    • -
    • runAsNonRoot - TBD, see #4186
    • -
    -
  • -
  • Scalable:
      -
    • Operations performed against the local Kubelet
    • -
    -
  • -
  • Artifacts:
      -
    • Output artifacts must be saved on volumes (e.g. empty-dir) and not the base image layer (e.g. /tmp)
    • -
    -
  • -
  • Step/Task result:
      -
    • Warnings that normally go to stderr will be captured in a step's or a DAG task's outputs.result. This may require changes if your pipeline is conditioned on steps/tasks.name.outputs.result.
    • -
    -
  • -
  • Configuration:
      -
    • Additional Kubelet configuration may be needed
    • -
    -
  • -
-

Kubernetes API (k8sapi)

-

⚠️Deprecated. Removed in v3.4.

-
    -
  • Reliability:
      -
    • Works on GKE Autopilot
    • -
    -
  • -
  • Most secure:
      -
    • No privileged access
    • -
    • Cannot escape the privileges of the pod's service account
    • -
    • Can runAsNonRoot
    • -
    -
  • -
  • Least scalable:
      -
    • Log retrieval and container operations performed against the remote Kubernetes API
    • -
    -
  • -
  • Artifacts:
      -
    • Output artifacts must be saved on volumes (e.g. empty-dir) and not the base image layer (e.g. /tmp)
    • -
    -
  • -
  • Step/Task result:
      -
    • Warnings that normally go to stderr will be captured in a step's or a DAG task's outputs.result. This may require changes if your pipeline is conditioned on steps/tasks.name.outputs.result.
    • -
    -
  • -
  • Configuration:
      -
    • No additional configuration needed.
    • -
    -
  • -
-

Process Namespace Sharing (pns)

-

⚠️Deprecated. Removed in v3.4.

-
    -
  • More secure:
      -
    • No privileged access
    • -
    • cannot escape the privileges of the pod's service account
    • -
    • Can runAsNonRoot, if you use volumes (e.g. empty-dir) for your output artifacts
    • -
    • Processes are visible to other containers in the pod. This includes all information visible in /proc, such as passwords that were passed as arguments or environment variables. These are protected only by regular Unix permissions.
    • -
    -
  • -
  • Scalable:
      -
    • Most operations use local procfs.
    • -
    • Log retrieval uses the remote Kubernetes API
    • -
    -
  • -
  • Artifacts:
      -
    • Output artifacts can be located on the base layer (e.g. /tmp)
    • -
    • Cannot capture artifacts from a base layer which has a volume mounted under it
    • -
    • Cannot capture artifacts from base layer if the container is short-lived.
    • -
    -
  • -
  • Configuration:
      -
    • No additional configuration needed.
    • -
    -
  • -
  • Process will no longer run with PID 1
  • -
  • Doesn't work for Windows containers.
  • -
-

Learn more


This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-executors/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-inputs/index.html b/workflow-inputs/index.html
index 3b9cca6a1924..b99ac0b39d9b 100644
--- a/workflow-inputs/index.html
+++ b/workflow-inputs/index.html
@@ -1,4088 +1,11 @@
+ Workflow Inputs - Argo Workflows - The workflow engine for Kubernetes

Workflow Inputs

-

Introduction

-

Workflows and templates operate on a set of defined parameters and arguments that are supplied to the running container. The precise details of how to manage the inputs can be confusing; this article attempts to clarify concepts and provide simple working examples to illustrate the various configuration options.

-

The examples below are limited to DAGTemplates and mainly focused on parameters, but similar reasoning applies to the other types of templates.

-

Parameter Inputs

-

First, some clarification of terms is needed. For a glossary reference, see Argo Core Concepts.

-

A workflow provides arguments, which are passed in to the entry point template. A template defines inputs which are then provided by template callers (such as steps, dag, or even a workflow). The structure of both is identical.

-

For example, in a Workflow, one parameter would look like this:

-
arguments:
-  parameters:
-  - name: workflow-param-1
-
-

And in a template:

-
inputs:
-  parameters:
-  - name: template-param-1
-
-

Inputs to DAGTemplates use the arguments format:

-
dag:
-  tasks:
-  - name: step-A
-    template: step-template-a
-    arguments:
-      parameters:
-      - name: template-param-1
-        value: abcd
-
-

Previous examples in context:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: example-
-spec:
-  entrypoint: main
-  arguments:
-    parameters:
-    - name: workflow-param-1
-  templates:
-  - name: main
-    dag:
-      tasks:
-      - name: step-A 
-        template: step-template-a
-        arguments:
-          parameters:
-          - name: template-param-1
-            value: "{{workflow.parameters.workflow-param-1}}"
-
-  - name: step-template-a
-    inputs:
-      parameters:
-        - name: template-param-1
-    script:
-      image: alpine
-      command: [/bin/sh]
-      source: |
-          echo "{{inputs.parameters.template-param-1}}"
-
-

To run this example: argo submit -n argo example.yaml -p 'workflow-param-1="abcd"' --watch

-

Using Previous Step Outputs As Inputs

-

In DAGTemplates, it is common to want to take the output of one step and send it as the input to another step. However, there is a difference in how this works for artifacts vs parameters. Suppose our step-template-a defines some outputs:

-
outputs:
-  parameters:
-    - name: output-param-1
-      valueFrom:
-        path: /p1.txt
-  artifacts:
-    - name: output-artifact-1
-      path: /some-directory
-
-

In my DAGTemplate, I can send these outputs to another template like this:

-
dag:
-  tasks:
-  - name: step-A 
-    template: step-template-a
-    arguments:
-      parameters:
-      - name: template-param-1
-        value: "{{workflow.parameters.workflow-param-1}}"
-  - name: step-B
-    dependencies: [step-A]
-    template: step-template-b
-    arguments:
-      parameters:
-      - name: template-param-2
-        value: "{{tasks.step-A.outputs.parameters.output-param-1}}"
-      artifacts:
-      - name: input-artifact-1
-        from: "{{tasks.step-A.outputs.artifacts.output-artifact-1}}"
-
-

Note the important distinction between parameters and artifacts; they both share the name field, but one uses value and the other uses from.
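For completeness, here is a hedged sketch of what step-template-b might look like on the consuming side (the parameter and artifact names mirror the snippets above; the script body is purely illustrative):

- name: step-template-b
  inputs:
    parameters:
      - name: template-param-2
    artifacts:
      - name: input-artifact-1
        path: /some-directory
  script:
    image: alpine
    command: [/bin/sh]
    source: |
      echo "{{inputs.parameters.template-param-2}}"
      ls /some-directory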


This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-inputs/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-notifications/index.html b/workflow-notifications/index.html
index 3bb7841633d8..245a62945700 100644
--- a/workflow-notifications/index.html
+++ b/workflow-notifications/index.html
@@ -1,3924 +1,11 @@
+ Workflow Notifications - Argo Workflows - The workflow engine for Kubernetes

Workflow Notifications

-

There are a number of use cases where you may wish to notify an external system when a workflow completes:

-
    -
  1. Send an email.
  2. Send a Slack (or other instant message).
  3. Send a message to Kafka (or other message bus).
-

You have options:

-
    -
  1. For individual workflows, you can add an exit handler to your workflow, such as in this example (see also the sketch after this list).
  2. If you want the same for every workflow, you can add an exit handler to the default workflow spec.
  3. Use a service (e.g. Heptio Labs EventRouter) to react to the Workflow events we emit.
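As an illustrative sketch of option 1 (not an official recipe; the webhook URL and message format are placeholders), an exit handler that posts the final status might look like this:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: notify-
spec:
  entrypoint: main
  onExit: notify                  # runs after the workflow completes, whatever the outcome
  templates:
    - name: main
      container:
        image: argoproj/argosay:v2
    - name: notify
      container:
        image: curlimages/curl:latest
        command: [sh, -c]
        args:
          - >
            curl -fs -X POST -H 'Content-Type: application/json'
            -d '{"text": "Workflow {{workflow.name}} finished with status {{workflow.status}}"}'
            https://hooks.example.com/placeholder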

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-notifications/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-of-workflows/index.html b/workflow-of-workflows/index.html
index b825be22e652..00638320605b 100644
--- a/workflow-of-workflows/index.html
+++ b/workflow-of-workflows/index.html
@@ -1,4071 +1,11 @@
+ Workflow of Workflows - Argo Workflows - The workflow engine for Kubernetes

Workflow of Workflows

-
-

v2.9 and after

-
-

Introduction

-

The Workflow of Workflows pattern involves a parent workflow triggering one or more child workflows, managing them, and acting on their results.

-

Examples

-

You can use workflowTemplateRef to trigger a workflow inline.

-
    -
  1. Define your workflow as a WorkflowTemplate.
-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: workflow-template-submittable
-spec:
-  entrypoint: whalesay-template
-  arguments:
-    parameters:
-      - name: message
-        value: hello world
-  templates:
-    - name: whalesay-template
-      inputs:
-        parameters:
-          - name: message
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{inputs.parameters.message}}"]
-
-
    -
  2. Create the WorkflowTemplate in the cluster using argo template create <yaml>.
  3. Define the workflow of workflows.
-
# This template demonstrates a workflow of workflows.
-# Workflow triggers one or more workflows and manages them.
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-of-workflows-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      steps:
-        - - name: workflow1
-            template: resource-without-argument
-            arguments:
-              parameters:
-              - name: workflowtemplate
-                value: "workflow-template-submittable"
-        - - name: workflow2
-            template: resource-with-argument
-            arguments:
-              parameters:
-              - name: workflowtemplate
-                value: "workflow-template-submittable"
-              - name: message
-                value: "Welcome Argo"
-
-    - name: resource-without-argument
-      inputs:
-        parameters:
-          - name: workflowtemplate
-      resource:
-        action: create
-        manifest: |
-          apiVersion: argoproj.io/v1alpha1
-          kind: Workflow
-          metadata:
-            generateName: workflow-of-workflows-1-
-          spec:
-            workflowTemplateRef:
-              name: {{inputs.parameters.workflowtemplate}}
-        successCondition: status.phase == Succeeded
-        failureCondition: status.phase in (Failed, Error)
-
-    - name: resource-with-argument
-      inputs:
-        parameters:
-          - name: workflowtemplate
-          - name: message
-      resource:
-        action: create
-        manifest: |
-          apiVersion: argoproj.io/v1alpha1
-          kind: Workflow
-          metadata:
-            generateName: workflow-of-workflows-2-
-          spec:
-            arguments:
-              parameters:
-              - name: message
-                value: {{inputs.parameters.message}}
-            workflowTemplateRef:
-              name: {{inputs.parameters.workflowtemplate}}
-        successCondition: status.phase == Succeeded
-        failureCondition: status.phase in (Failed, Error)
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-of-workflows/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-pod-security-context/index.html b/workflow-pod-security-context/index.html
index bacc94494e65..081ce6d3a4e5 100644
--- a/workflow-pod-security-context/index.html
+++ b/workflow-pod-security-context/index.html
@@ -1,3933 +1,11 @@
+ Workflow Pod Security Context - Argo Workflows - The workflow engine for Kubernetes

Workflow Pod Security Context

-

By default, all workflow pods run as root. The Docker executor even requires privileged: true.

-

For other workflow executors, you can run your workflow pods more securely by configuring the security context for your workflow pod.

-

This is likely to be necessary if you have a pod security policy. You probably can't use the Docker executor if you have a pod security policy.

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: security-context-
-spec:
-  securityContext:
-    runAsNonRoot: true
-    runAsUser: 8737 # any non-root user
-
-

You can configure this globally using workflow defaults.

-
-

It is easy to make a workflow need root unintentionally

-

You may find that users' workflows have been written to require root with seemingly innocuous code. E.g. mkdir /my-dir would require root.

-
-
-

You must use volumes for output artifacts

-

If you use runAsNonRoot, you cannot have output artifacts on the base layer (e.g. /tmp). You must use a volume (e.g. emptyDir).

-
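A hedged sketch of that pattern (the file paths, user ID, and names are illustrative): write outputs to an emptyDir volume instead of the base layer, and export the artifact from that path.

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: security-context-artifact-
spec:
  entrypoint: main
  securityContext:
    runAsNonRoot: true
    runAsUser: 8737
  volumes:
    - name: out
      emptyDir: {}
  templates:
    - name: main
      container:
        image: alpine
        command: [sh, -c]
        args: ["echo hello > /out/result.txt"]
        volumeMounts:
          - name: out
            mountPath: /out
      outputs:
        artifacts:
          - name: result
            path: /out/result.txt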

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-pod-security-context/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-rbac/index.html b/workflow-rbac/index.html
index 7dec1fef23fd..3b0de4ee0ffc 100644
--- a/workflow-rbac/index.html
+++ b/workflow-rbac/index.html
@@ -1,3950 +1,11 @@
+ Workflow RBAC - Argo Workflows - The workflow engine for Kubernetes

Workflow RBAC

-

All pods in a workflow run with the service account specified in workflow.spec.serviceAccountName, or if omitted, the default service account of the workflow's namespace. The amount of access which a workflow needs is dependent on what the workflow needs to do. For example, if your workflow needs to deploy a resource, then the workflow's service account will require 'create' privileges on that resource.

-

Warning: We do not recommend using the default service account in production. It is a shared account, so it may have permissions added to it that you do not want. Instead, create a service account only for your workflow.

-

The minimum for the executor to function:

-

For >= v3.4:

-
apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: executor
-rules:
-  - apiGroups:
-      - argoproj.io
-    resources:
-      - workflowtaskresults
-    verbs:
-      - create
-      - patch
-
-

For <= v3.3, use:

-
apiVersion: rbac.authorization.k8s.io/v1
-kind: Role
-metadata:
-  name: executor
-rules:
-  - apiGroups:
-      - ""
-    resources:
-      - pods
-    verbs:
-      - get
-      - patch
-
-

Warning: For many organizations, it may not be acceptable to give a workflow the pod patch permission; see #3961.

-

If you are not using the emissary, you'll need additional permissions. See executor for suitable permissions.
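To put either Role above to use, you would typically bind it to a dedicated service account and reference that account from your workflow; a hedged sketch (the service account and binding names are placeholders):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-workflow-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: executor-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: executor
subjects:
  - kind: ServiceAccount
    name: my-workflow-sa

Your Workflow would then set spec.serviceAccountName: my-workflow-sa.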


This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-rbac/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-restrictions/index.html b/workflow-restrictions/index.html
index 3a51d5093aa2..6fec714e8877 100644
--- a/workflow-restrictions/index.html
+++ b/workflow-restrictions/index.html
@@ -1,4009 +1,11 @@
+ Workflow Restrictions - Argo Workflows - The workflow engine for Kubernetes

Workflow Restrictions

-
-

v2.9 and after

-
-

Introduction

-

As the administrator of the controller, you may want to limit which types of Workflows your users can run. Workflow Restrictions allow you to set requirements for all Workflows.

-

Available Restrictions

-
    -
  • templateReferencing: Strict: Only process Workflows using workflowTemplateRef. You can use this to require usage of WorkflowTemplates, disallowing arbitrary Workflow execution.
  • -
  • templateReferencing: Secure: Same as Strict plus enforce that a referenced WorkflowTemplate hasn't changed between operations. If a running Workflow's underlying WorkflowTemplate changes, the Workflow will error out.
  • -
-

Setting Workflow Restrictions

-

You can add workflowRestrictions in the workflow-controller-configmap.

-

For example, to specify that Workflows may only run with workflowTemplateRef:

-
# This file describes the config settings available in the workflow controller configmap
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: workflow-controller-configmap
-data:
-  workflowRestrictions: |
-    templateReferencing: Strict
-
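With this restriction in place, only Workflows shaped like the following would be processed (a hedged sketch; the WorkflowTemplate name is illustrative and must already exist in the cluster):

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: restricted-
spec:
  workflowTemplateRef:
    name: workflow-template-submittable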

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-restrictions/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-submitting-workflow/index.html b/workflow-submitting-workflow/index.html
index 59c9d31403f6..ce0723fb45f4 100644
--- a/workflow-submitting-workflow/index.html
+++ b/workflow-submitting-workflow/index.html
@@ -1,3938 +1,11 @@
+ One Workflow Submitting Another - Argo Workflows - The workflow engine for Kubernetes

One Workflow Submitting Another

-
-

v2.8 and after

-
-

If you want one workflow to create another, you can do this using curl. You'll need an access token. Typically the best way is to submit from a workflow template:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: demo-
-spec:
-  entrypoint: main
-  templates:
-    - name: main
-      steps:
-        - - name: a
-            template: create-wf
-    - name: create-wf
-      script:
-        image: curlimages/curl:latest
-        command:
-          - sh
-        source: >
-          curl https://argo-server:2746/api/v1/workflows/argo/submit \
-            -fs \
-            -H "Authorization: Bearer eyJhbGci..." \
-            -d '{"resourceKind": "WorkflowTemplate", "resourceName": "wait", "submitOptions": {"labels": "workflows.argoproj.io/workflow-template=wait"}}'
-
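Rather than hard-coding the bearer token, you could inject it from a Kubernetes secret; a hedged sketch of just the create-wf template (the secret name and key are placeholders):

- name: create-wf
  script:
    image: curlimages/curl:latest
    command:
      - sh
    env:
      - name: ARGO_TOKEN
        valueFrom:
          secretKeyRef:
            name: argo-submit-token   # hypothetical secret holding the access token
            key: token
    source: >
      curl https://argo-server:2746/api/v1/workflows/argo/submit \
        -fs \
        -H "Authorization: Bearer $ARGO_TOKEN" \
        -d '{"resourceKind": "WorkflowTemplate", "resourceName": "wait", "submitOptions": {"labels": "workflows.argoproj.io/workflow-template=wait"}}'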

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-submitting-workflow/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file
diff --git a/workflow-templates/index.html b/workflow-templates/index.html
index 4fc997b58e6a..56e5cec9de3d 100644
--- a/workflow-templates/index.html
+++ b/workflow-templates/index.html
@@ -1,4491 +1,11 @@
+ Workflow Templates - Argo Workflows - The workflow engine for Kubernetes

Workflow Templates

-
-

v2.4 and after

-
-

Introduction

-

WorkflowTemplates are definitions of Workflows that live in your cluster. This allows you to create a library of frequently-used templates and reuse them either by submitting them directly (v2.7 and after) or by referencing them from your Workflows.

-

WorkflowTemplate vs template

-

The terms WorkflowTemplate and template have created an unfortunate naming collision and have caused some confusion in the past. However, a quick description should clarify each and their differences.

-
    -
  • A template (lower-case) is a task within a Workflow or (confusingly) a WorkflowTemplate under the field templates. Whenever you define a Workflow, you must define at least one (but usually more than one) template to run. This template can be of type container, script, dag, steps, resource, or suspend and can be referenced by an entrypoint or by other dag and steps templates.
  • -
-

Here is an example of a Workflow with two templates:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: steps-
-spec:
-  entrypoint: hello           # We reference our first "template" here
-
-  templates:
-  - name: hello               # The first "template" in this Workflow, it is referenced by "entrypoint"
-    steps:                    # The type of this "template" is "steps"
-    - - name: hello
-        template: whalesay    # We reference our second "template" here
-        arguments:
-          parameters: [{name: message, value: "hello1"}]
-
-  - name: whalesay             # The second "template" in this Workflow, it is referenced by "hello"
-    inputs:
-      parameters:
-      - name: message
-    container:                # The type of this "template" is "container"
-      image: docker/whalesay
-      command: [cowsay]
-      args: ["{{inputs.parameters.message}}"]
-
-
    -
  • A WorkflowTemplate is a definition of a Workflow that lives in your cluster. Since it is a definition of a Workflow, it also contains templates. These templates can be referenced from within the WorkflowTemplate and from other Workflows and WorkflowTemplates on your cluster. To see how, please see Referencing Other WorkflowTemplates.
  • -
-

WorkflowTemplate Spec

-
-

v2.7 and after

-
-

In v2.7 and after, all the fields in WorkflowSpec (except for priority that must be configured in a WorkflowSpec itself) are supported for WorkflowTemplates. You can take any existing Workflow you may have and convert it to a WorkflowTemplate by substituting kind: Workflow to kind: WorkflowTemplate.

-
-

v2.4 – 2.6

-
-

WorkflowTemplates in v2.4 - v2.6 are only partial Workflow definitions and only support the templates and arguments fields.

-

This would not be a valid WorkflowTemplate in v2.4 - v2.6 (notice entrypoint field):

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: workflow-template-submittable
-spec:
-  entrypoint: whalesay-template     # Fields other than "arguments" and "templates" not supported in v2.4 - v2.6
-  arguments:
-    parameters:
-      - name: message
-        value: hello world
-  templates:
-    - name: whalesay-template
-      inputs:
-        parameters:
-          - name: message
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{inputs.parameters.message}}"]
-
-

However, this would be a valid WorkflowTemplate:

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: workflow-template-submittable
-spec:
-  arguments:
-    parameters:
-      - name: message
-        value: hello world
-  templates:
-    - name: whalesay-template
-      inputs:
-        parameters:
-          - name: message
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{inputs.parameters.message}}"]
-
-

Adding labels/annotations to Workflows with workflowMetadata

-
-

2.10.2 and after

-
-

To automatically add labels and/or annotations to Workflows created from WorkflowTemplates, use workflowMetadata.

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: workflow-template-submittable
-spec:
-  workflowMetadata:
-    labels:
-      example-label: example-value
-
-

Working with parameters

-

When working with parameters in a WorkflowTemplate, please note the following:

-
    -
  • When working with global parameters, you can instantiate your global variables in your Workflow and then directly reference them in your WorkflowTemplate. Below is a working example:
  • -
-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: hello-world-template-global-arg
-spec:
-  serviceAccountName: argo
-  templates:
-    - name: hello-world
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{workflow.parameters.global-parameter}}"]
----
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-world-wf-global-arg-
-spec:
-  serviceAccountName: argo
-  entrypoint: whalesay
-  arguments:
-    parameters:
-      - name: global-parameter
-        value: hello
-  templates:
-    - name: whalesay
-      steps:
-        - - name: hello-world
-            templateRef:
-              name: hello-world-template-global-arg
-              template: hello-world
-
-
    -
  • When working with local parameters, the values of local parameters must be supplied at the template definition inside the WorkflowTemplate. Below is a working example:
  • -
-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: hello-world-template-local-arg
-spec:
-  templates:
-    - name: hello-world
-      inputs:
-        parameters:
-          - name: msg
-            value: "hello world"
-      container:
-        image: docker/whalesay
-        command: [cowsay]
-        args: ["{{inputs.parameters.msg}}"]
----
-apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: hello-world-local-arg-
-spec:
-  entrypoint: whalesay
-  templates:
-    - name: whalesay
-      steps:
-        - - name: hello-world
-            templateRef:
-              name: hello-world-template-local-arg
-              template: hello-world
-
-

Referencing other WorkflowTemplates

-

You can reference templates from other WorkflowTemplates (see the difference between the two) using a templateRef field. Just as you reference other templates within the same Workflow, you should do so from a steps or dag template.

-

Here is an example from a steps template:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-template-hello-world-
-spec:
-  entrypoint: whalesay
-  templates:
-  - name: whalesay
-    steps:                              # You should only reference external "templates" in a "steps" or "dag" "template".
-      - - name: call-whalesay-template
-          templateRef:                  # You can reference a "template" from another "WorkflowTemplate" using this field
-            name: workflow-template-1   # This is the name of the "WorkflowTemplate" CRD that contains the "template" you want
-            template: whalesay-template # This is the name of the "template" you want to reference
-          arguments:                    # You can pass in arguments as normal
-            parameters:
-            - name: message
-              value: "hello world"
-
-

You can also do so similarly with a dag template:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-template-hello-world-
-spec:
-  entrypoint: whalesay
-  templates:
-  - name: whalesay
-    dag:
-      tasks:
-        - name: call-whalesay-template
-          templateRef:
-            name: workflow-template-1
-            template: whalesay-template
-          arguments:
-            parameters:
-            - name: message
-              value: "hello world"
-
-

You should never reference another template directly on a template object (outside of a steps or dag template). This includes both using template and templateRef. This behavior is deprecated, no longer supported, and will be removed in a future version.

-

Here is an example of a deprecated reference that should not be used:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-template-hello-world-
-spec:
-  entrypoint: whalesay
-  templates:
-  - name: whalesay
-    template:                     # You should NEVER use "template" here. Use it under a "steps" or "dag" template (see above).
-    templateRef:                  # You should NEVER use "templateRef" here. Use it under a "steps" or "dag" template (see above).
-      name: workflow-template-1
-      template: whalesay-template
-    arguments:                    # Arguments here are ignored. Use them under a "steps" or "dag" template (see above).
-      parameters:
-      - name: message
-        value: "hello world"
-
-

The reasoning for deprecating this behavior is that a template is a "definition": it defines inputs and things to be done once instantiated. With this deprecated behavior, the same template object is allowed to be an "instantiator": to pass in "live" arguments and reference other templates (those other templates may be "definitions" or "instantiators").

-

This behavior has been problematic and dangerous. It causes confusion and has design inconsistencies.

-
-

2.9 and after

-
-

Create Workflow from WorkflowTemplate Spec

-

You can create a Workflow from a WorkflowTemplate spec using workflowTemplateRef. If you pass arguments to the created Workflow, they will be merged with the workflow template's arguments. Here is an example of referring to a WorkflowTemplate as a Workflow while passing an entrypoint and Workflow arguments to the WorkflowTemplate:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-template-hello-world-
-spec:
-  entrypoint: whalesay-template
-  arguments:
-    parameters:
-      - name: message
-        value: "from workflow"
-  workflowTemplateRef:
-    name: workflow-template-submittable
-
-

Here is an example of referring to a WorkflowTemplate as a Workflow, using the WorkflowTemplate's entrypoint and Workflow arguments:

-
apiVersion: argoproj.io/v1alpha1
-kind: Workflow
-metadata:
-  generateName: workflow-template-hello-world-
-spec:
-  workflowTemplateRef:
-    name: workflow-template-submittable
-
-

Managing WorkflowTemplates

-

CLI

-

You can create some example templates as follows:

-
argo template create https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/templates.yaml
-
-

Then submit a workflow using one of those templates:

-
argo submit https://raw.githubusercontent.com/argoproj/argo-workflows/main/examples/workflow-template/hello-world.yaml
-
-
-

2.7 and after

-
-

Then submit a WorkflowTemplate as a Workflow:

-
argo submit --from workflowtemplate/workflow-template-submittable
-
-

If you need to submit a WorkflowTemplate as a Workflow with parameters:

-
argo submit --from workflowtemplate/workflow-template-submittable -p message=value1
-
-

kubectl

-

Using kubectl apply -f and kubectl get wftmpl:
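For example (the file name and namespace below are placeholders):

kubectl apply -n argo -f workflow-template.yaml
kubectl get wftmpl -n argo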

-

GitOps via Argo CD

-

WorkflowTemplate resources can be managed with GitOps by using Argo CD.

-

UI

-

WorkflowTemplate resources can also be managed by the UI.

-

Users can specify options under enum to enable drop-down list selection when submitting WorkflowTemplates from the UI.

-
apiVersion: argoproj.io/v1alpha1
-kind: WorkflowTemplate
-metadata:
-  name: workflow-template-with-enum-values
-spec:
-  entrypoint: argosay
-  arguments:
-    parameters:
-      - name: message
-        value: one
-        enum:
-          -   one
-          -   two
-          -   three
-  templates:
-    - name: argosay
-      inputs:
-        parameters:
-          - name: message
-            value: '{{workflow.parameters.message}}'
-      container:
-        name: main
-        image: 'argoproj/argosay:v2'
-        command:
-          - /argosay
-        args:
-          - echo
-          - '{{inputs.parameters.message}}'
-

This page has moved to https://argo-workflows.readthedocs.io/en/latest/workflow-templates/.

+

You should be redirected there automatically. Please click the link above if you are not redirected.

\ No newline at end of file