It sounds like you just need to update your UI pod. #293 changed how the frontend generates these graphs (in sync with a change to the way the DSL compiles conditional pipelines). The most up-to-date UI shows the following. Let me know if this doesn't seem right:
* Fix bugs with cleanup of deployments and clusters.
* Disable cache discovery in cleanup_ci.
* Fix kubeflow#480.
* Add a K8s batch job to make it easy to do one-off runs of the cleanup script.
* Update the playbook.
* Stop using the delete_deployment script; it's very outdated and is causing
  problems. Instead, let's just issue a delete for the deployment and, if that
  fails to properly GC everything, rely on cleanup_ci.py.
* We should really be trying to use kfctl to delete things.
* Split up cleanup_deployments into two functions: one for clusters and one
  for deployments.
* Log cleanup ops.
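The delete-then-sweep approach described above can be sketched as follows. This is a hypothetical helper, not code from the repo: the `gcloud deployment-manager deployments delete` command and its flags are real, but the function name and structure are illustrative assumptions.

```python
import subprocess


def delete_deployment(project, name, dry_run=False):
    """Issue a Deployment Manager delete for `name`.

    Hypothetical sketch: if the delete fails to GC everything, a periodic
    sweep (e.g. cleanup_ci.py) is relied on to collect the leftovers,
    rather than scripting per-resource teardown here.
    """
    cmd = [
        "gcloud", "deployment-manager", "deployments", "delete", name,
        "--project=" + project,
        "--quiet",  # non-interactive, suitable for CI
    ]
    if dry_run:
        # Return the command without executing it (useful for tests/logging).
        return cmd
    subprocess.check_call(cmd)
    return cmd
```

Logging the returned command line on each run would also cover the "log cleanup ops" item.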
HumairAK pushed a commit to red-hat-data-services/data-science-pipelines that referenced this issue on Mar 11, 2024.
For example:
When users use the 'flip' variable a second time, the generated execution turns out to be:
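The expected shape of the graph when a condition variable is reused can be illustrated with a toy model. To be clear, this is not the real KFP compiler or UI code; the `Graph` class and its methods are invented for illustration. The point is only that referencing the same task output (e.g. `flip`) in two conditions should yield two condition nodes fanning out from a single upstream task, not a duplicated task.

```python
class Graph:
    """Toy execution-graph model (illustrative only, not KFP internals)."""

    def __init__(self):
        self.nodes = []   # node names, in insertion order
        self.edges = []   # (parent, child) pairs

    def add_task(self, name):
        self.nodes.append(name)
        return name

    def add_condition(self, parent, predicate):
        # Each condition gets its own node, but shares the existing parent.
        node = "condition-%d: %s" % (len(self.edges) + 1, predicate)
        self.nodes.append(node)
        self.edges.append((parent, node))
        return node


graph = Graph()
flip = graph.add_task("flip")                    # single flip task
graph.add_condition(flip, "output == 'heads'")   # first use of `flip`
graph.add_condition(flip, "output == 'tails'")   # second use of `flip`
```

In this model both condition nodes point back at the one `flip` node, which matches the fan-out structure a correctly compiled conditional pipeline should display.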